Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 971–981, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Jointly optimizing word representations for lexical and sentential tasks with the C-PHRASE model Nghia The Pham Germ´an Kruszewski Angeliki Lazaridou Marco Baroni Center for Mind/Brain Sciences University of Trento {thenghia.pham|german.kruszewski|angeliki.lazaridou|marco.baroni}@unitn.it Abstract We introduce C-PHRASE, a distributional semantic model that learns word representations by optimizing context prediction for phrases at all levels in a syntactic tree, from single words to full sentences. C-PHRASE outperforms the state-of-theart C-BOW model on a variety of lexical tasks. Moreover, since C-PHRASE word vectors are induced through a compositional learning objective (modeling the contexts of words combined into phrases), when they are summed, they produce sentence representations that rival those generated by ad-hoc compositional models. 1 Introduction Distributional semantic models, that induce vector-based meaning representations from patterns of co-occurrence of words in corpora, have proven very successful at modeling many lexical relations, such as synonymy, co-hyponomy and analogy (Mikolov et al., 2013c; Turney and Pantel, 2010). The recent evaluation of Baroni et al. (2014b) suggests that the C-BOW model introduced by Mikolov et al. (2013a) is, consistently, the best across many tasks.1 Interestingly, C-BOW vectors are estimated with a simple compositional approach: The weights of adjacent words are jointly optimized so that their sum will predict the distribution of their contexts. This is reminiscent of how the parameters of some compositional distributional seman1We refer here not only to the results reported in Baroni et al. (2014b), but also to the more extensive evaluation that Baroni and colleagues present in the companion website (http://clic.cimec.unitn. it/composes/semantic-vectors.html). The experiments there suggest that only the Glove vectors of Pennington et al. (2014) are competitive with C-BOW, and only when trained on a corpus several orders of magnitude larger than the one used for C-BOW. tic models are estimated by optimizing the prediction of the contexts in which phrases occur in corpora (Baroni and Zamparelli, 2010; Guevara, 2010; Dinu et al., 2013). However, these compositional approaches assume that word vectors have already been constructed, and contextual evidence is only used to induce optimal combination rules to derive representations of phrases and sentences. In this paper, we follow through on this observation to propose the new C-PHRASE model. Similarly to C-BOW, C-PHRASE learns word representations by optimizing their joint context prediction. However, unlike in flat, window-based C-BOW, C-PHRASE groups words according to their syntactic structure, and it simultaneously optimizes context-predictions at different levels of the syntactic hierarchy. For example, given training sentence “A sad dog is howling in the park”, C-PHRASE will optimize context prediction for dog, sad dog, a sad dog, a sad dog is howling, etc., but not, for example, for howling in, as these two words do not form a syntactic constituent by themselves. C-PHRASE word representations outperform C-BOW on several word-level benchmarks. 
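To make the phrase-level objective concrete before the formal definition in Section 2, the short sketch below (ours, not the authors' code) enumerates the constituent spans of the example sentence from an assumed bracketing; these are the units whose contexts C-PHRASE is trained to predict, while non-constituents such as howling in are never used.

```python
# Illustrative sketch (ours, not the authors' code): list the spans that
# C-PHRASE would treat as context-prediction targets, given a constituency
# bracketing of the example sentence. The bracketing below is assumed for
# illustration and may differ from the Stanford parser's output.

def constituent_spans(tree, start=0, spans=None):
    """Collect (start, end) token spans for every constituent in a nested-list tree."""
    if spans is None:
        spans = []
    if isinstance(tree, str):              # leaf: a single word
        spans.append((start, start + 1))
        return start + 1, spans
    end = start
    for child in tree:
        end, spans = constituent_spans(child, end, spans)
    spans.append((start, end))
    return end, spans

words = "a sad dog is howling in the park".split()
tree = [[["a", ["sad", "dog"]], ["is", "howling"]], ["in", ["the", "park"]]]

_, spans = constituent_spans(tree)
for l, r in sorted(set(spans)):
    print(" ".join(words[l:r]))
# Includes "sad dog", "a sad dog", "a sad dog is howling" and the full
# sentence, but never a non-constituent string such as "howling in".
```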
In addition, because they are estimated in a compositional way, C-PHRASE word vectors, when combined through simple addition, produce sentence representations that are better than those obtained when adding other kinds of vectors, and competitive against ad-hoc compositional methods on various sentence meaning benchmarks.

2 The C-PHRASE model

We start with a brief overview of the models proposed by Mikolov et al. (2013a), as C-PHRASE builds on them. The Skip-gram model derives the vector of a target word by setting its weights to predict the words surrounding it in the corpus. More specifically, the objective function is:

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\ j \neq 0} \log p(w_{t+j} \mid w_t) \qquad (1)

where the word sequence w_1, w_2, ..., w_T is the training corpus and c is the size of the window around the target word w_t, consisting of the context words w_{t+j} that must be predicted by the induced vector representation for the target. While Skip-gram learns each word representation separately, the C-BOW model takes their combination into account. More precisely, it tries to predict a context word from the combination of the previous and following words, where the combination method is vector addition. The objective function is:

\frac{1}{T} \sum_{t=1}^{T} \log p(w_t \mid w_{t-c}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+c}) \qquad (2)

While other distributional models consider sequences of words jointly as context when estimating the parameters for a single word (Agirre et al., 2009; Melamud et al., 2014), C-BOW is unique in that it estimates the weights of a sequence of words jointly, based on their shared context. In this respect, C-BOW extends the distributional hypothesis (Harris, 1954), namely that words with similar context distributions should have similar meanings, to longer sequences. However, the word combinations of C-BOW are not natural linguistic constituents, but arbitrary n-grams (e.g., sequences of 5 words with a gap in the middle). Moreover, the model does not attempt to capture the hierarchical nature of syntactic phrasing, such that big brown dog is a meaningful phrase, but so are its children brown dog and dog. C-PHRASE aims at capturing the same intuition that word combinations with similar context distributions will have similar meaning, but it applies it to syntactically motivated, potentially nested phrases. More precisely, we estimate word vectors such that they and their summed combinations are able to predict the contexts of words, phrases and sentences. The model is formalized as follows. We start from a parsed text corpus T, composed of constituents C[w_l, ..., w_r], where w_l, ..., w_r are the words spanned by the constituent, located in positions l to r in the corpus. We minimize an objective function analogous to equations (1) and (2), but instead of just using individual words or bags of words to predict context, we use summed vector representations of well-formed constituents at all levels in the syntactic tree to predict the context of these constituents. There are similarities with both C-BOW and Skip-gram: at the leaf nodes, C-PHRASE acts like Skip-gram, whereas at higher nodes in the parse tree, it behaves like the C-BOW model.
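As a point of reference for equations (1) and (2), the following sketch (our illustration, not Mikolov et al.'s implementation) shows the prediction problems the two objectives set up: Skip-gram pairs a single target word with each context word, while C-BOW pairs the summed context window with the word in the middle. Negative sampling and the actual optimization are omitted.

```python
# Minimal sketch of the prediction problems behind equations (1) and (2).

def skipgram_examples(words, c):
    """(input word, word to predict) pairs for the Skip-gram objective."""
    pairs = []
    for t, target in enumerate(words):
        for j in range(-c, c + 1):
            if j != 0 and 0 <= t + j < len(words):
                pairs.append((target, words[t + j]))
    return pairs

def cbow_examples(words, c):
    """(context words to be summed, word to predict) pairs for the C-BOW objective."""
    pairs = []
    for t, target in enumerate(words):
        context = [words[t + j] for j in range(-c, c + 1)
                   if j != 0 and 0 <= t + j < len(words)]
        pairs.append((context, target))   # the model sums the context vectors
    return pairs

sentence = "a sad dog is howling in the park".split()
print(skipgram_examples(sentence, 2)[:4])
print(cbow_examples(sentence, 2)[:2])
```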
Concretely, we try to predict the words located within a window c_C from every constituent in the parse tree.² In order to do so, we learn vector representations v_w for words by maximizing, with stochastic gradient descent, the sum of the log probabilities of the words in the context window of the well-formed constituents:

\sum_{C[w_l, \cdots, w_r] \in T} \ \sum_{1 \le j \le c_C} \Big[ \log p(w_{l-j} \mid C[w_l, \cdots, w_r]) + \log p(w_{r+j} \mid C[w_l, \cdots, w_r]) \Big] \qquad (3)

with p theoretically defined as:

p(w_O \mid C[w_l, \cdots, w_r]) = \frac{\exp\left( v'^{\top}_{w_O} \frac{\sum_{i=l}^{r} v_{w_i}}{r-l+1} \right)}{\sum_{w=1}^{W} \exp\left( v'^{\top}_{w} \frac{\sum_{i=l}^{r} v_{w_i}}{r-l+1} \right)}

where W is the size of the vocabulary, v' and v denote output (context) and input vectors, respectively, and we take the input vectors to represent the words. In practice, since the normalization constant for the above probability is expensive to compute, we follow Mikolov et al. (2013b) and use negative sampling.

We let the context window size c_C vary as a function of the height of the constituent in the syntactic tree. The height h(C) of a constituent is given by the maximum number of intermediate nodes separating it from any of the words it dominates (such that h = 0 for words, h = 1 for two-word phrases, etc.). Then, for a constituent of height h(C), C-PHRASE considers c_C = c_1 + h(C)c_2 context words to its left and right (the non-negative integers c_1 and c_2 are hyperparameters of the model; with c_2 = 0, context becomes constant across heights).

² Although here we only use single words as context, the latter can be extended to encompass any sensible linguistic item, e.g., frequent n-grams or, as discussed below, syntactically-mediated expressions.

Figure 1: C-PHRASE context prediction objective for the phrase small cat and its children. The phrase vector is obtained by summing the word vectors. The predicted window is wider for the higher constituent (the phrase).

The intuition for enlarging the window proportionally to height is that, for shorter phrases, narrower contexts are likely to be most informative (e.g., a modifying adjective for a noun), whereas for longer phrases and sentences it might be better to focus on broader "topical" information spread across larger windows (paragraphs containing sentences about weather might also contain the words rain and sun, but without any tendency for these words to be perfectly adjacent to the target sentences). Figure 1 illustrates the prediction objective for a two-word phrase and its children. Since all constituents (except the topmost) form parts of larger constituents, their representations will be learned both from the objective of predicting their own contexts, and from error propagation from the same objective applied higher in the tree. As a side effect, words, being lower in the syntactic tree, will have their vectors updated more often, and thus might have a greater impact on the learned parameters. This is another reason for varying window size with height, so that the latter effect will be counter-balanced by higher constituents having larger context windows to predict.

For lexical tasks, we directly use the vectors induced by C-PHRASE as word representations. For sentential tasks, we simply add the vectors of the words in a sentence to obtain its representation, exploiting the fact that C-PHRASE was trained to predict phrase contexts from the additive combination of their elements.

Joint optimization of word and phrase vectors The C-PHRASE hierarchical learning objective can capture, in parallel, generalizations about the contexts of words and phrases at different levels of complexity.
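The sketch below (again illustrative rather than the released code) complements the span enumeration shown earlier: given constituents annotated with their height, it builds the (constituent, context word) prediction pairs of equation (3), using the constituent vector normalized by span length, as in the softmax above, and the height-dependent window c_C = c_1 + h(C)c_2. Function and variable names are our own; negative sampling is omitted.

```python
import numpy as np

# Illustrative sketch of the prediction step behind equation (3).

def constituent_vector(input_vecs, tokens, l, r):
    """Input vectors of the spanned words, summed and normalized by span length."""
    return np.mean([input_vecs[w] for w in tokens[l:r + 1]], axis=0)

def context_pairs(tokens, constituents, c1, c2):
    """(constituent span, context word) pairs with a height-dependent window."""
    pairs = []
    for l, r, h in constituents:            # (left index, right index, height)
        window = c1 + h * c2                # c_C = c1 + h(C) * c2
        for j in range(1, window + 1):
            if l - j >= 0:
                pairs.append(((l, r), tokens[l - j]))
            if r + j < len(tokens):
                pairs.append(((l, r), tokens[r + j]))
    return pairs

tokens = "the small cat sleeps on the mat".split()
constituents = [(1, 1, 0), (2, 2, 0), (1, 2, 1)]    # small, cat, [small cat]
print(context_pairs(tokens, constituents, c1=1, c2=1))

input_vecs = {w: np.random.randn(4) for w in tokens}  # toy 4-d vectors
print(constituent_vector(input_vecs, tokens, 1, 2))   # vector for "small cat"
```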
This results, as we will see, in better word vectors, presumably because C-PHRASE is trained to predict how the contexts of a word change based on its phrasal collocates (cup will have very different contexts in world cup vs. coffee cup ). At the same time, because the vectors are optimized based on their occurrence in phrases of different syntactic complexity, they produce good sentence representations when they are combined. To the best of our knowledge, C-PHRASE is the first model that is jointly optimized for lexical and compositional tasks. C-BOW uses shallow composition information to learn word vectors. Conversely, some compositional models –e.g., Kalchbrenner et al. (2014), Socher et al. (2013)– induce word representations, that are only optimized for a compositional task and are not tested at the lexical level. Somewhat relatedly to what we do, Hill et al. (2014) evaluated representations learned in a sentence translation task on wordlevel benchmarks. Some a priori justification for treating word and sentence learning as joint problems comes from human language acquisition, as it is obvious that children learn word and phrase meanings in parallel and interactively, not sequentially (Tomasello, 2003). Knowledge-leanness and simplicity For training, C-PHRASE requires a large, syntacticallyparsed corpus (more precisely, it only requires the constituent structure assigned by the parser, as it is blind to syntactic labels). Both large unannotated corpora and efficient pre-trained parsers are available for many languages, making the CPHRASE knowledge demands feasible for practical purposes. There is no need to parse the sentences we want to build representations for at test time, since the component word vectors are simply added. The only parameters of the model are the word vectors; specifically, no extra parameters are needed for composition (composition models such as the one presented in Socher et al. (2012) require an extra parameter matrix for each word in the vocabulary, and even leaner models such as the one of Guevara (2010) must estimate a parameter matrix for each composition rule in the grammar). This makes C-PHRASE as simple as additive and multiplicative composition (Mitchell and 973 Lapata, 2010),3 but C-PHRASE is both more effective in compositional tasks (see evaluation below), and it has the further advantage that it learns its own word vectors, thus reducing the number of arbitrary choices to be made in modeling. Supervision Unlike many recent composition models (Kalchbrenner and Blunsom, 2013; Kalchbrenner et al., 2014; Socher et al., 2012; Socher et al., 2013, among others), the context-prediction objective of C-PHRASE does not require annotated data, and it is meant to provide generalpurpose representations that can serve in different tasks. C-PHRASE vectors can also be used as initialization parameters for fully supervised, task-specific systems. Alternatively, the current unsupervised objective could be combined with task-specific supervised objectives to fine-tune CPHRASE to specific purposes. Sensitivity to syntactic structure During training, C-PHRASE is sensitive to syntactic structure. To cite an extreme example, boy flowers will be joined in a context-predicting phrase in “these are considered [boy flowers]”, but not in “he gave [the boy] [flowers]”. A more common case is that of determiners, that will only occur in phrases that also contain the following word, but not necessarily the preceding one. 
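As noted above, no composition parameters are needed: a sentence representation is just the sum of its word vectors, and sentence pairs can be compared by cosine. The minimal sketch below illustrates this, with toy random vectors standing in for trained C-PHRASE vectors; all names are ours.

```python
import numpy as np

# Minimal sketch of parameter-free additive composition and cosine scoring.

rng = np.random.default_rng(0)
vocab = "a man is playing guitar the dog runs in park".split()
vecs = {w: rng.standard_normal(50) for w in vocab}   # toy word vectors

def sentence_vector(sentence):
    """Additive composition over the words we have vectors for."""
    return np.sum([vecs[w] for w in sentence.split() if w in vecs], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = sentence_vector("a man is playing a guitar")
s2 = sentence_vector("the dog runs in the park")
print(cosine(s1, s2))   # the relatedness score compared to human judgments
```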
Sentence composition at test time, on the other hand, is additive, and thus syntax-insensitive. Still, the vectors being combined will reflect syntactic generalizations learned in training. Even if C-PHRASE produces the same representation for red+car and car+red, this representation combines a red vector that, during training, has often occurred in the modifier position of adjective-noun phrases, whereas car will have often occurred in the corresponding head position. So, presumably, the red+car=car+red vector will encode the adjective-noun asymmetry induced in learning. While the model won’t be able to distinguish the rare cases in which car red is genuinely used as a phrase, in realistic scenarios this won’t be a problem, because only red car will be encountered. In this respect, the successes and failures of C-PHRASE can tell us to what extent word order information is truly distinctive in practice, and to what extent it can instead be reconstructed from the typical role that words play in sentences. 3We do not report results for component-wise multiplicative in our evaluation because it performed much worse than addition in all the tasks. Comparison with traditional syntax-sensitive word representations Syntax has often been exploited in distributional semantics for a richer characterization of context. By relying on a syntactic parse of the input corpus, a distributional model can take more informative contexts such as subject-of-eat vs. object-of-eat into account (Baroni and Lenci, 2010; Curran and Moens, 2002; Grefenstette, 1994; Erk and Pad´o, 2008; Levy and Goldberg, 2014a; Pad´o and Lapata, 2007; Rothenh¨ausler and Sch¨utze, 2009). In this approach, syntactic information serves to select and/or enrich the contexts that are used to build representations of target units. On the other hand, we use syntax to determine the target units that we build representations for (in the sense that we jointly learn representations of their constituents). The focus is thus on unrelated aspects of model induction, and we could indeed use syntax-mediated contexts together with our phrasing strategy. Currently, given eat (red apples), we treat eat as window-based context of red apples, but we could also take the context to be object-of-eat. 3 Evaluation 3.1 Data sets Semantic relatedness of words In this classic lexical task, the models are required to quantify the degree of semantic similarity or relatedness of pairs of words in terms of cosines between the corresponding vectors. These scores are then compared to human gold standards. Performance is assessed by computing the correlation between system and human scores (Spearman correlation in all tasks except rg, where it is customary to report Pearson). We used, first of all, the MEN (men) data set of Bruni et al. (2014), that is split into 1K pairs for training/development, and 1K pairs for testing. We used the training set to tune the hyperparameters of our model, and report performance on the test set. The C-BOW model of Baroni et al. (2014b) achieved state-of-the art performance on MEN test. We also evaluate on the widely used WordSim353 set introduced by Finkelstein et al. (2002), which consists of 353 word pairs. The WordSim353 data were split by Agirre et al. (2009) into similarity (wss) and relatedness (wsr) subsets, focusing on strictly taxonomic (television/radio) vs. broader topical cases (Maradona/football), respectively. State-of-theart performance on both sets is reported by Baroni 974 et al. (2014b), with the C-BOW model. 
We further consider the classic data set of Rubenstein and Goodenough (1965) (rg), consisting of 65 noun pairs. We report the state-of-the-art from Hassan and Mihalcea (2011), which exploited Wikipedia’s linking structure. Concept categorization Systems are asked to group a set of nominal concepts into broader categories (e.g. arthritis and anthrax into illness; banana and grape into fruit). As in previous work, we treat this as an unsupervised clustering task. We feed the similarity matrix produced by a model for all concepts in a test set to the CLUTO toolkit (Karypis, 2003), that clusters them into n groups, where n is the number of categories. We use standard CLUTO parameters from the literature, and quantify performance by cluster purity with respect to the gold categories. The AlmuharebPoesio benchmark (Almuhareb, 2006) (ap) consists of 402 concepts belonging to 21 categories. A distributional model based on carefully chosen syntactic relations achieved top ap performance (Rothenh¨ausler and Sch¨utze, 2009). The ESSLLI 2008 data set (Baroni et al., 2008) (esslli) consists of 6 categories and 42 concepts. State of the art was achieved by Katrenko and Adriaans (2008) by using full-Web queries and manually crafted patterns. Semantic analogy The last lexical task we pick is analogy (an), introduced in Mikolov et al. (2013c). We focus on their semantic challenge, containing about 9K questions. In each question, the system is given a pair exemplifying a relation (man/king) and a test word (woman); it is then asked to find the word (queen) that instantiates the same relation with the test word as that of the example pair. Mikolov et al. (2013c) subtract the vector of the first word in a pair from the second, add the vector of the test word and look for the nearest neighbor of the resulting vector (e.g., find the word whose vector is closest to king man + woman). We follow the method introduced by Levy and Goldberg (2014b), which returns the word x maximizing cos(king,x)×cos(woman,x) cos(man,x) . This method yields better results for all models. Performance is measured by accuracy in retrieving the correct answer (in our search space of 180K words). The current state of the art on the semantic part and on the whole data set was reached by Pennington et al. (2014), who trained their word representations on a huge corpus consisting of 42B words. Sentential semantic relatedness Similarly to word relatedness, composed sentence representations can be evaluated against benchmarks where humans provided relatedness/similarity scores for sentence pairs (sentences with high scores, such as “A person in a black jacket is doing tricks on a motorbike”/“A man in a black jacket is doing tricks on a motorbike” from the SICK data-set, tend to be near-paraphrases). Following previous work on these data sets, Pearson correlation is our figure of merit, and we report it between human scores and sentence vector cosine similarities computed by the models. SICK (Marelli et al., 2014) (sick-r) was created specifically for the purpose of evaluating compositional models, focusing on linguistic phenomena such as lexical variation and word order. Here we report performance of the systems on the test part of the data set, which contains 5K sentence pairs. The top performance (from the SICK SemEval shared task) was reached by Zhao et al. (2014) using a heterogeneous set of features that include WordNet and extra training corpora. Agirre et al. (2012) and Agirre et al. 
(2013) created two collections of sentential similarities consisting of subsets coming from different sources. From these, we pick the Microsoft Research video description dataset (msrvid), where near paraphrases are descriptions of the same short video, and the OnWN 2012 (onwn1) and OnWN 2013 (onwn2) data sets (each of these sets contains 750 pairs). The latter are quite different from other sentence relatedness benchmarks, since they compare definitions for the same or different words taken from WordNet and OntoNotes: these glosses often are syntactic fragments (“cause something to pass or lead somewhere”), rather than full sentences. We report top performance on these tasks from the respective shared challenges, as summarized by Agirre et al. (2012) and Agirre et al. (2013). Again, the top systems use feature-rich, supervised methods relying on distributional similarity as well as other sources, such as WordNet and named entity recognizers. Sentential entailment Detecting the presence of entailment between sentences or longer passages is one of the most useful features that the computational analysis of text could provide (Dagan et al., 2009). We test our model on the SICK 975 entailment task (sick-e) (Marelli et al., 2014). All SICK sentence pairs are labeled as ENTAILING (“Two teams are competing in a football match”/”Two groups of people are playing football”), CONTRADICTING (“The brown horse is near a red barrel at the rodeo”/“The brown horse is far from a red barrel at the rodeo”) or NEUTRAL (“A man in a black jacket is doing tricks on a motorbike”/”A person is riding the bicycle on one wheel”). For each model, we train a simple SVM classifier based on 2 features: cosine similarity between the two sentence vectors, as given by the models, and whether the sentence pair contains a negation word (the latter has been shown to be a very informative feature for SICK entailment). The current state-of-the-art is reached by Lai and Hockenmaier (2014), using a much richer set of features, that include WordNet, the denotation graph of Young et al. (2014) and extra training data from other resources. Sentiment analysis Finally, as sentiment analysis has emerged as a popular area of application for compositional models, we test our methods on the Stanford Sentiment Treebank (Socher et al., 2013) (sst), consisting of 11,855 sentences from movie reviews, using the coarse annotation into 2 sentiment degrees (negative/positive). We follow the official split into train (8,544), development (1,101) and test (2,210) parts. We train an SVM classifier on the training set, using the sentence vectors composed by a model as features, and report accuracy on the test set. State of the art is obtained by Le and Mikolov (2014) with the Paragraph Vector approach we describe below. 3.2 Model implementation The source corpus we use to build the lexical vectors is created by concatenating three sources: ukWaC,4 a mid-2009 dump of the English Wikipedia5 and the British National Corpus6 (about 2.8B words in total). We build vectors for the 180K words occurring at least 100 times in the corpus. Since our training procedure requires parsed trees, we parse the corpus using the Stanford parser (Klein and Manning, 2003). C-PHRASE has two hyperparameters (see Section 2 above), namely basic window size (c1) and height-dependent window enlargement factor (c2). 4http://wacky.sslmit.unibo.it 5http://en.wikipedia.org 6http://www.natcorp.ox.ac.uk Moreover, following Mikolov et al. 
(2013b), during training we sub-sample less informative, very frequent words: this option is controlled by a parameter t, resulting in aggressive subsampling of words with relative frequency above it. We tune on MEN-train, obtaining c1 = 5, c2 = 2 and t = 10−5. As already mentioned, sentence vectors are built by summing the vectors of the words in them. In lexical tasks, we compare our model to the best C-BOW model from Baroni et al. (2014b),7 and to a Skip-gram model built using the same hyperparameters as C-PHRASE (that also led to the best MEN-train results for Skip-gram). In sentential tasks, we compare our model against adding the best C-BOW vectors pretrained by Baroni and colleagues,8 and adding our Skip-gram vectors. We compare the additive approaches to two sophisticated composition models. The first is the Practical Lexical Function (PLF) model of Paperno et al. (2014). This is a linguistically motivated model in the tradition of the “functional composition” approaches of Coecke et al. (2010) and Baroni et al. (2014a), and the only model in this line of research that has been shown to empirically scale up to real-life sentence challenges. In short, in the PLF model all words are represented by vectors. Words acting as argument-taking functions (such as verbs or adjectives) are also associated to one matrix for each argument they take (e.g., each transitive verb has a subject and an object matrix). Vector representations of arguments are recursively multiplied by function matrices, following the syntactic tree up to the top node. The final sentence representation is obtained by summing all the resulting vectors. The PLF approach requires syntactic parsing both in training and in testing and, more cumbersomely, to train a separate matrix for each argument slot of each function word (the training objective is again a context-predicting one). Here, we report PLF results on msrvid and onwn2 from Paperno et al. (2014), noting that they also used two simple but precious cues (word overlap and sentence length) we do not adopt here. We used their pre-trained vectors and matrices also for the SICK challenges, while the number of new ma7For fairness, we report their results when all tasks were evaluated with the same set of parameters, tuned on rg: this is row 8 of their Table 2. 8http://clic.cimec.unitn.it/composes/ semantic-vectors.html 976 trices to estimate made it too time-consuming to implement this model in the onwn1 and sst tasks. Finally, we test the Paragraph Vector (PV) approach recently proposed by Le and Mikolov (2014). Under PV, sentence representations are learned by predicting the words that occur in them. This unsupervised method has been shown by the authors to outperform much more sophisticated, supervised neural-network-based composition models on the sst task. We use our own implementation for this approach. Unlike in the original experiments, we found the PV-DBOW variant of PV to consistently outperform PV-DM, and so we report results obtained with the former. Note that PV-DBOW aims mainly at providing representations for sentences, not words. When we do not need to induce vectors for sentences in the training corpus, i.e., only train to learn single words’ representations and the softmax weights, PV-DBOW essentially reduces to Skipgram. Therefore, we produce the PV-DBOW vectors for the sentences in the evaluation data sets using the softmax weights learned by Skip-gram. 
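For concreteness, the frequency sub-sampling controlled by the parameter t mentioned above can be implemented with the discard rule commonly associated with Mikolov et al. (2013b); the formulation below is that standard rule and is assumed here, not taken from the C-PHRASE code.

```python
import math
import random

# Assumed word2vec-style sub-sampling: very frequent words are randomly
# dropped from training, rarer words are kept.

def keep_token(word, freq, total, t=1e-5, rng=random.random):
    """Return True if the token survives sub-sampling."""
    f = freq[word] / total                 # relative corpus frequency
    if f <= t:
        return True
    discard_prob = 1.0 - math.sqrt(t / f)
    return rng() >= discard_prob

freq = {"the": 5_000_000, "howling": 1_200}
total = 100_000_000
print(keep_token("the", freq, total))      # dropped most of the time
print(keep_token("howling", freq, total))  # usually kept
```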
However, it is not clear that, if we were to train PVDBOW jointly for words and sentences, we would get word vectors as good as those that Skip-gram induces. 4 Results The results on the lexical tasks reported in Table 1 prove that C-PHRASE is providing excellent word representations, (nearly) as good or better than the C-BOW vectors of Baroni and colleagues in all cases, except for ap. Whenever C-PHRASE is not close to the state of the art results, the latter relied on richer knowledge sources and/or much larger corpora (ap, esslli, an). Turning to the sentential tasks (Table 2), we first remark that using high-quality word vectors (such as C-BOW) and summing them leads to good results in all tasks, competitive with those obtained with more sophisticated composition models. This confirms the observation made by Blacoe and Lapata (2012) that simple-minded composition models are not necessarily worse than advanced approaches. Still, C-PHRASE is consistently better than C-BOW in all tasks, except sst, where the two models reach the same performance level. C-PHRASE is outperforming PV on all tasks except sick-e, where the two models have the same performance, and onwn2, where PV is slightly better. C-PHRASE is outperforming PLF by a large margin on the SICK sets, whereas the two models are equal on msrvid, and PLF better on onwn2. Recall, however, that on the latter two benchmarks PLF used extra word overlap and sentence length features, so the comparison is not entirely fair. The fact that state-of-the-art performance is well above our models is not surprising, since the SOA systems are invariably based on a wealth of knowledge sources, and highly optimized for a task. To put some of our results in a broader perspective, C-PHRASE’s sick-r performance is 1% better than the median result of systems that participated in the SICK SemEval challenge, and comparable to that of Beltagy et al. (2014), who entered the competition with a system combining distributional semantics with a supervised probabilistic soft logic system. For sick-e (the entailment task), C-PHRASE’s performance is less than one point below the median of the SemEval systems, and slightly above that of the Stanford submission, that used a recursive neural network with a tensor layer. Finally, the performance of all our models, including PV, on sst is remarkably lower than the state-of-the-art performance of PV as reported by Le and Mikolov (2014). We believe that the crucial difference is that these authors estimated PV vectors specifically on the sentiment treebank training data, thus building ad-hoc vectors encoding the semantics of movie reviews. We leave it to further research to ascertain whether we could better fine-tune our models to sst by including the sentiment treebank training phrases in our source corpus. Comparing vector lengths of C-BOW and CPHRASE We gather some insight into how the C-PHRASE objective might adjust word representations for composition with respect to C-BOW by looking at how the length of word vectors changes across the two models.9 While this is a very coarse measure, if a word vector is much longer/shorter (relative to the length of other word vectors of the same model) for C-PHRASE vs. C-BOW, it means that, when sentences are composed by addition, the effect of the word on the resulting sentence representation will be stronger/weaker. 9We performed the same analysis for C-PHRASE and Skip-gram, finding similar general trends to the ones we report for C-PHRASE and C-BOW. 
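The paper does not spell out the exact relative-length-difference statistic, so the sketch below is one plausible instantiation, shown with toy vectors: normalize each word's vector norm by the mean norm of its model, then rank words by the difference between the two models.

```python
import numpy as np

# One plausible (assumed) relative-length-difference test between two models.

def relative_lengths(vecs):
    """Each word's norm divided by the mean norm of its model."""
    norms = {w: np.linalg.norm(v) for w, v in vecs.items()}
    mean = np.mean(list(norms.values()))
    return {w: n / mean for w, n in norms.items()}

def length_shift(cphrase_vecs, cbow_vecs):
    """Positive = emphasized by C-PHRASE relative to C-BOW; negative = de-emphasized."""
    a, b = relative_lengths(cphrase_vecs), relative_lengths(cbow_vecs)
    return sorted((a[w] - b[w], w) for w in a if w in b)

rng = np.random.default_rng(1)
words = ["be", "not", "gnawing"]
toy_cphrase = {w: rng.standard_normal(10) for w in words}
toy_cbow = {w: rng.standard_normal(10) for w in words}
print(length_shift(toy_cphrase, toy_cbow))  # most de-emphasized words first
```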
            men  wss  wsr  rg  ap  esslli  an
Skip-gram    78   77   66  80  65      82  63
C-BOW        80   78   68  83  71      77  68
C-PHRASE     79   79   70  83  65      84  69
SOA          80   80   70  86  79      91  82

Table 1: Lexical task performance. See Section 3.1 for figures of merit (all in percentage form) and state-of-the-art references. C-BOW results (tuned on rg) are taken from Baroni et al. 2014b.

            sick-r  sick-e  msrvid  onwn1  onwn2  sst
Skip-gram       70      72      74     66     62   78
C-BOW           70      74      74     69     63   79
C-PHRASE        72      75      79     70     65   79
PLF             57      72      79     NA     67   NA
PV              67      75      77     66     66   77
SOA             83      85      88     71     75   88

Table 2: Sentential task performance. See Section 3.1 for figures of merit (all in percentage form) and state-of-the-art references. The PLF results on msrvid and onwn2 are taken from Paperno et al. 2014.

The relative-length-difference test returns the following words as the ones that are most severely de-emphasized by C-PHRASE compared to C-BOW: be, that, an, not, they, he, who, when, well, have. Clearly, C-PHRASE is down-weighting grammatical terms that tend to be context-agnostic, and will be accompanied, in phrases, by more context-informative content words. Indeed, the list of terms that are instead emphasized by C-PHRASE includes such content-rich, monosemous words as gnawing, smackdown, demographics. This is confirmed by a POS-level analysis that indicates that the categories that are, on average, most de-emphasized by C-PHRASE are: determiners, modals, pronouns, prepositions and (more surprisingly) proper nouns. The ones that are, in relative terms, more emphasized are: -ing verb forms, plural and singular nouns, adjectives and their superlatives. While this reliance on content words to the detriment of grammatical terms is not always good for sentential tasks ("not always good" means something very different from "always good"!), the convincing comparative performance of C-PHRASE in such tasks suggests that the semantic effect of grammatical terms is in any case beyond the scope of current corpus-based models, and often not crucial to attain competitive results on typical benchmarks (think, e.g., of how little modals, one of the categories that C-PHRASE downplays the most, will matter when detecting paraphrases that are based on picture descriptions). We also applied the length difference test to words in specific categories, finding similar patterns. For example, looking at -ly adverbs only, those that are de-emphasized the most by C-PHRASE are recently, eventually, originally, notably and currently: all adverbs denoting temporal factors or speaker attitude. On the other hand, the ones that C-PHRASE lengthens the most, relative to C-BOW, are clinically, divinely, ecologically, noisily and theatrically: all adverbs with more specific, content-word-like meanings, that are better captured by distributional methods, and are likely to have a bigger impact on tasks such as paraphrasing or entailment.

Effects of joint optimization at word and phrase levels As we have argued before, C-PHRASE is able to obtain good word representations presumably because it learns to predict how the context of a word changes in the presence of different collocates. To gain further insight into this claim, we looked at the nearest neighbours of some example terms, like neural, network and neural network (the latter, composed by addition) both in C-PHRASE and C-BOW. The results for this particular example can be appreciated in Table 3.
Interestingly, while for C-BOW we observe some confusion between the meaning of the individual words and the phrase, C-PHRASE seems to provide more orthogonal representations for the lexical items. For example, neural in C-BOW contains neighbours that fit well with neural network, like hopfield, connectionist and feedforward. Conversely, neural network has neighbours that correspond to network like local-area and packet-switched. In contrast, C-PHRASE neighbours for neural are mostly related to the brain sense of the word, e.g., cortical, neurophysiological, etc. (with the only exception of connectionist). The first neighbour of neural network, excluding its own component words, quite sensibly, is perceptron.

                C-BOW                                               C-PHRASE
neural          network              neural network   neural               network        neural network
neuronal        networks             network          neuronal             networks       network
neurons         superjanet4          neural           cortical             internetwork   neural
hopfield        backhaul             networks         connectionist        wans           perceptron
cortical        fiber-optic          hopfield         neurophysiological   network.       networks
connectionist   point-to-multipoint  packet-switched  sensorimotor         multicasting   hebbian
feed-forward    nsfnet               small-world      sensorimotor         nsfnet         neurons
feedforward     multi-service        local-area       neocortex            networking     neocortex
neuron          circuit-switched     superjanet4      electrophysiological tymnet         connectionist
backpropagation wide-area            neuronal         neurobiological      x.25           neuronal

Table 3: Nearest neighbours of neural, network and neural network, both for C-BOW and C-PHRASE.

5 Conclusion

We introduced C-PHRASE, a distributional semantic model that is trained on the task of predicting the contexts surrounding phrases at all levels of a hierarchical sentence parse, from single words to full sentences. Consequently, word vectors are induced by taking into account not only their contexts, but also how co-occurrence with other words within a syntactic constituent is affecting these contexts. C-PHRASE vectors outperform state-of-the-art C-BOW vectors in a wide range of lexical tasks. Moreover, because of the way they are induced, when C-PHRASE vectors are summed, they produce sentence representations that are as good or better than those obtained with sophisticated composition methods. C-PHRASE is a very parsimonious approach: The only major resource required, compared to a completely knowledge-free, unsupervised method, is an automated parse of the training corpus (but no syntactic labels are required, nor parsing at test time). C-PHRASE has only 3 hyperparameters and no composition-specific parameter to tune and store. Having established a strong empirical baseline with this parsimonious approach, in future research we want to investigate the impact of possible extensions on both lexical and sentential tasks. When combining the vectors, either for induction or composition, we will try replacing plain addition with other operations, starting with something as simple as learning scalar weights for different words in a phrase (Mitchell and Lapata, 2010). We also intend to explore more systematic ways to incorporate supervised signals into learning, to fine-tune C-PHRASE vectors to specific tasks. On the testing side, we are fascinated by the good performance of additive models, that (at test time, at least) do not take word order nor syntactic structure into account.
We plan to perform a systematic analysis of both existing benchmarks and natural corpus data, both to assess the actual impact that such factors have on the aspects of meaning we are interested in (take two sentences in an entailment relation: how often does shuffling the words in them make it impossible to detect entailment?), and to construct new benchmarks that are more challenging for additive methods. The C-PHRASE vectors described in this paper are made publicly available at: http://clic. cimec.unitn.it/composes/. Acknowledgments We thank Gemma Boleda and the anonymous reviewers for useful comments. We acknowledge ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of HLT-NAACL, pages 19–27, Boulder, CO. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: a 979 pilot on semantic textual similarity. In Proceedings of *SEM, pages 385–393, Montreal, Canada. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic Textual Similarity. In Proceedings of *SEM, pages 32–43, Atlanta, GA. Abdulrahman Almuhareb. 2006. Attributes in Lexical Acquisition. Phd thesis, University of Essex. Marco Baroni and Alessandro Lenci. 2010. Distributional Memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP, pages 1183–1193, Boston, MA. Marco Baroni, Stefan Evert, and Alessandro Lenci, editors. 2008. Bridging the Gap between Semantic Theory and Computational Simulations: Proceedings of the ESSLLI Workshop on Distributional Lexical Semantic. FOLLI, Hamburg. Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2014a. Frege in space: A program for compositional distributional semantics. Linguistic Issues in Language Technology, 9(6):5–110. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014b. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, pages 238–247, Baltimore, MD. Islam Beltagy, Stephen Roller, Gemma Boleda, Katrin Erk, and Raymond Mooney. 2014. UTexas: Natural language semantics using distributional semantics and probabilistic logic. In Proceedings of SemEval, pages 796–801, Dublin, Ireland, August. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of EMNLP, pages 546–556, Jeju Island, Korea. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36:345–384. James Curran and Marc Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the ACL Workshop on Unsupervised Lexical Acquisition, pages 59–66, Philadelphia, PA. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2009. Recognizing textual entailment: rationale, evaluation and approaches. Natural Language Engineering, 15:459–476. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. 
General estimation and evaluation of compositional distributional semantic models. In Proceedings of ACL Workshop on Continuous Vector Space Models and their Compositionality, pages 50– 58, Sofia, Bulgaria. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of EMNLP, pages 897–906, Honolulu, HI. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer, Boston, MA. Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of the EMNLP GEMS Workshop, pages 33–37, Uppsala, Sweden. Zellig Harris. 1954. Distributional structure. Word, 10(2-3):1456–1162. Samer Hassan and Rada Mihalcea. 2011. Semantic relatedness using salient semantic analysis. In Proceedings of AAAI, pages 884–889, San Francisco, CA. Felix Hill, KyungHyun Cho, S´ebastien Jean, Coline Devin, and Yoshua Bengio. 2014. Not all neural embeddings are born equal. In Proceedings of the NIPS Learning Semantics Workshop, Montreal, Canada. Published online: https://sites.google.com/site/ learningsemantics2014/. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of ACL Workshop on Continuous Vector Space Models and their Compositionality, pages 119–126, Sofia, Bulgaria. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL, pages 655–665, Baltimore, MD. George Karypis. 2003. CLUTO: A clustering toolkit. Technical Report 02-017, University of Minnesota Department of Computer Science. Sophia Katrenko and Pieter Adriaans. 2008. Qualia structures and their impact on the concrete noun categorization task. In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics, pages 17–24, Hamburg, Germany. 980 Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL, pages 423–430, Sapporo, Japan. Alice Lai and Julia Hockenmaier. 2014. Illinois-lh: A denotational and distributional approach to semantics. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 329–334, Dublin, Ireland, August. Association for Computational Linguistics and Dublin City University. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053. Omer Levy and Yoav Goldberg. 2014a. Dependencybased word embeddings. In Proceedings of ACL (Volume 2: Short Papers), pages 302–308, Baltimore, Maryland. Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representations. In Proceedings of CoNLL, pages 171–180, Ann Arbor, MI. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of SemEval, pages 1–8, Dublin, Ireland. Oren Melamud, Ido Dagan, Jacob Goldberger, Idan Szpektor, and Deniz Yuret. 2014. Probabilistic modeling of joint-context in distributional similarity. In Proceedings of CoNLL, pages 181–190, Ann Arbor, MI. 
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. http://arxiv.org/ abs/1301.3781/. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119, Lake Tahoe, NV. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of NAACL, pages 746–751, Atlanta, Georgia. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Sebastian Pad´o and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Denis Paperno, Nghia The Pham, and Marco Baroni. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In Proceedings of ACL, pages 90–99, Baltimore, MD. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543, Doha, Qatar. Klaus Rothenh¨ausler and Hinrich Sch¨utze. 2009. Unsupervised classification with dependency based word spaces. In Proceedings of the EACL GEMS Workshop, pages 17–24, Athens, Greece. Herbert Rubenstein and John Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP, pages 1201–1211, Jeju Island, Korea. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631–1642, Seattle, WA. Michael Tomasello. 2003. Constructing a Language: A Usage-Based Theory of Language Acquisition. Harvard University Press, Cambridge, MA. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Jiang Zhao, Tiantian Zhu, and Man Lan. 2014. Ecnu: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 271–277, Dublin, Ireland, August. Association for Computational Linguistics and Dublin City University. 981
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 982–991, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Robust Subgraph Generation Improves Abstract Meaning Representation Parsing Keenon Werling Stanford University [email protected] Gabor Angeli Stanford University [email protected] Christopher D. Manning Stanford University [email protected] Abstract The Abstract Meaning Representation (AMR) is a representation for opendomain rich semantics, with potential use in fields like event extraction and machine translation. Node generation, typically done using a simple dictionary lookup, is currently an important limiting factor in AMR parsing. We propose a small set of actions that derive AMR subgraphs by transformations on spans of text, which allows for more robust learning of this stage. Our set of construction actions generalize better than the previous approach, and can be learned with a simple classifier. We improve on the previous state-of-the-art result for AMR parsing, boosting end-to-end performance by 3 F1 on both the LDC2013E117 and LDC2014T12 datasets. 1 Introduction The Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a rich, graph-based language for expressing semantics over a broad domain. The formalism is backed by a large datalabeling effort, and it holds promise for enabling a new breed of natural language applications ranging from semantically aware MT to rich broaddomain QA over text-based knowledge bases. Figure 1 shows an example AMR for “he gleefully ran to his dog Rover,” and we give a brief introduction to AMR in Section 2. This paper focuses on AMR parsing, the task of mapping a natural language sentence into an AMR graph. We follow previous work (Flanigan et al., 2014) in dividing AMR parsing into two steps. The first step is concept identification, which generates AMR nodes from text, and which we’ll refer to as NER++ (Section 3.1). The second step is relation Figure 1: The AMR graph for He gleefully ran to his dog Rover. We show that improving the generation of low level subgraphs (e.g., Rover generating name :op1 −−→“Rover”) significantly improves end-to-end performance. identification, which adds arcs to link these nodes into a fully connected AMR graph, which we’ll call SRL++ (Section 3.2). We observe that SRL++ is not the hard part of AMR parsing; rather, much of the difficulty in AMR is generating high accuracy concept subgraphs from the NER++ component. For example, when the existing AMR parser JAMR (Flanigan et al., 2014) is given a gold NER++ output, and must only perform SRL++ over given subgraphs it scores 80 F1 – nearly the inter-annotator agreement of 83 F1, and far higher than its end to end accuracy of 59 F1. SRL++ within AMR is relatively easy given a perfect NER++ output, because so much pressure is put on the output of NER++ to carry meaningful information. For example, there’s a strong typecheck feature for the existence and type of any arc just by looking at its end-points, and syntactic dependency features are very informative for removing any remaining ambiguity. If a system is con982 Figure 2: A graphical explanation of our method. We represent the derivation process for He gleefully ran to his dog Rover. 
First the tokens in the sentence are labeled with derivation actions, then these actions are used to generate AMR subgraphs, which are then stitched together to form a coherent whole. sidering how to link the node run-01 in Figure 1, the verb-sense frame for “run-01” leaves very little uncertainty for what we could assign as an ARG0 arc. It must be a noun, which leaves either he or dog, and this is easily decided in favor of he by looking for an nsubj arc in the dependency parse. The primary contribution of this paper is a novel approach to the NER++ task, illustrated in Figure 2. We notice that the subgraphs aligned to lexical items can often be generated from a small set of generative actions which generalize across tokens. For example, most verbs generate an AMR node corresponding to the verb sense of the appropriate PropBank frame – e.g., run generates run-01 in Figure 1. This allows us to frame the NER++ task as the task of classifying one of a small number of actions for each token, rather than choosing a specific AMR subgraph for every token in the sentence. Our approach to the end-to-end AMR parsing task is therefore as follows: we define an action space for generating AMR concepts, and create a classifier for classifying lexical items into one of these actions (Section 4). This classifier is trained from automatically generated alignments between the gold AMR trees and their associated sentences (Section 5), using an objective which favors alignment mistakes which are least harmful to the NER++ component. Finally, the concept subgraphs are combined into a coherent AMR parse using the maximum spanning connected subgraph algorithm of Flanigan et al. (2014). We show that our approach provides a large boost to recall over previous approaches, and that end to end performance is improved from 59 to 62 smatch (an F1 measure of correct AMR arcs; see Cai and Knight (2013)) when incorporated into the SRL++ parser of Flanigan et al. (2014). When evaluating the performance of our action classifier in isolation, we obtain an action classification accuracy of 84.1%. 2 The AMR Formalism AMR is a language for expressing semantics as a rooted, directed, and potentially cyclic graph, where nodes represent concepts and arcs are relationships between concepts. AMR is based on neo-Davidsonian semantics, (Davidson, 1967; Parsons, 1990). The nodes (concepts) in an AMR graph do not have to be explicitly grounded in the source sentence, and while such an alignment is often generated to train AMR parsers, it is not provided in the training corpora. The semantics of nodes can represent lexical items (e.g., dog), sense tagged lexical items (e.g., run-01), type markers (e.g., date-entity), and a host of other phenomena. The edges (relationships) in AMR describe one of a number of semantic relationships between concepts. The most salient of these is semantic role labels, such as the ARG0 and destination arcs in Figure 2. However, often these arcs define 983 Figure 3: AMR representation of the word sailor, which is notable for breaking the word up into a self-contained multi-node unit unpacking the derivational morphology of the word. a semantics more akin to syntactic dependencies (e.g., mod standing in for adjective and adverbial modification), or take on domain-specific meaning (e.g., the month, day, and year arcs of a dateentity). 
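As a concrete reference for the walkthrough that follows, one simple way to encode such a graph is as a set of concept nodes plus labeled arcs. The snippet below is our own illustrative encoding, not the format used by the AMR tools; it spells out the graph of Figure 1 as described in the text, and the variable names are arbitrary.

```python
# Toy encoding of the AMR for "He gleefully ran to his dog Rover":
# concepts are nodes, relations are labeled arcs between node ids.

nodes = {
    "r": "run-01",
    "h": "he",
    "g": "glee",
    "d": "dog",
    "n": "name",
}
edges = [
    ("r", "ARG0", "h"),          # who is running
    ("r", "mod", "g"),           # "gleefully" contributes a glee modifier
    ("r", "destination", "d"),   # ran *to* his dog
    ("d", "poss", "h"),          # "his" dog
    ("d", "name", "n"),
    ("n", "op1", '"Rover"'),     # name node holding the surface string
]

for src, label, dst in edges:
    print(f"{nodes[src]} --{label}--> {nodes.get(dst, dst)}")
```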
To introduce AMR and its notation in more detail, we’ll unpack the translation of the sentence “he gleefully ran to his dog Rover.” We show in Figure 1 the interpretation of this sentence as an AMR graph. The root node of the graph is labeled run-01, corresponding to the PropBank (Palmer et al., 2005) definition of the verb ran. run-01 has an outgoing ARG0 arc to a node he, with the usual PropBank semantics. The outgoing mod edge from run-01 to glee takes a general purpose semantics corresponding to adjective, adverbial, or other modification of the governor by the dependent. We note that run-01 has a destination arc to dog. The label for destination is taken from a finite set of special arc sense tags similar to the preposition senses found in (Srikumar, 2013). The last portion of the figure parses dog to a node which serves as a type marker similar to named entity types, and Rover into the larger subgraph indicating a concept with name “Rover.” 2.1 AMR Subgraphs The mapping from tokens of a sentence to AMR nodes is not one-to-one. A single token or span of tokens can generate a subgraph of AMR consisting of multiple nodes. These subgraphs can logically be considered the expression of a single concept, and are useful to treat as such (e.g., see Section 3.1). Many of these multi-node subgraphs capture structured data such as time expressions, as in FigFigure 4: AMR representation of the span January 1, 2008, an example of how AMR can represent structured data by creating additional nodes such as date-entity to signify the presence of special structure. ure 4. In this example, a date-entity node is created to signify that this cluster of nodes is part of a structured sub-component representing a date, where the nodes and arcs within the component have specific semantics. This illustrates a broader recurring pattern in AMR: an artificial node may, based on its title, have expected children with special semantics. A particularly salient example of this pattern is the name node (see “Rover” in Figure 1) which signifies that all outgoing arcs with label op comprise the tokens of a name object. The ability to decouple the meaning representation of a lexical item from its surface form allows for rich semantic interpretations of certain concepts in a sentence. For example, the token sailor is represented in Figure 3 by a concept graph representing a person who performs the action sail01. Whereas often the AMR node aligned to a span of text is a straightforward function of the text, these cases remain difficult to capture in a principled way beyond memorizing mappings between tokens and subgraphs. 3 Task Decomposition To the best of our knowledge, the JAMR parser is the only published end-to-end AMR parser at the time of publication. An important insight in JAMR is that AMR parsing can be broken into two distinct tasks: (1) NER++ (concept identification): the task of interpreting what entities are being referred to in the text, realized by generating the best AMR subgraphs for a given set of tokens, and (2) SRL++ (relation identification): the task of discovering what relationships exist between entities, realized by taking the disjoint subgraphs generated by NER++ and creating a fullyconnected graph. We describe both tasks in more 984 detail below. 3.1 NER++ Much of the difficulty of parsing to AMR lies in generating local subgraphs representing the meaning of token spans. 
For instance, the formalism implicitly demands rich notions of NER, lemmatization, word sense disambiguation, number normalization, and temporal parsing; among others. To illustrate, a correct parse of the sentence in Figure 2 requires lemmatization (gleefully →glee), word sense tagging (run →run-01), and open domain NER (i.e., Rover), Furthermore, many of the generated subgraphs (e.g., sailor in Figure 3) have rich semantics beyond those produced by standard NLP systems. Formally, NER++ is the task of generating a disjoint set of subgraphs representing the meanings of localized spans of words in the sentence. For NER++, JAMR uses a simple Viterbi sequence model to directly generate AMR-subgraphs from memorized mappings of text spans to subgraphs. This paper’s main contribution, presented in Section 4, is to make use of generative actions to generate these subgraphs, rather than appealing to a memorized mapping. 3.2 SRL++ The second stage of the AMR decomposition consists of generating a coherent graph from the set of disjoint subgraphs produced by NER++. Whereas NER++ produces subgraphs whose arcs encode domain-specific semantics (e.g., month), the arcs in SRL++ tend to have generally applicable semantics. For example, the many arcs encode conventional semantic roles (e.g., ARG0 and destination in Figure 2), or a notion akin to syntactic dependencies (e.g., mod and poss in Figure 2). For SRL++, JAMR uses a variation of the maximum spanning connected graph algorithm augmented by dual decomposition to impose linguistically motivated constraints on a maximum likelihood objective. 4 A Novel NER++ Method The training sets currently available for AMR are not large. To illustrate, 38% of the words in the LDC2014E113 dev set are unseen during training time. With training sets this small, memorizationbased approaches are extremely brittle. We remove much of the necessity to memorize mappings in NER++ by partitioning the AMR subgraph search space in terms of the actions needed to derive a node from its aligned token. At test time we do a sequence labeling of input tokens with these actions, and then deterministically derive the AMR subgraphs from spans of tokens by applying the transformation decreed by their actions. We explain in Section 4.1 how exactly we manage this partition, and in Section 4.3 how we create training data from existing resources to setup and train an action-type classifier. 4.1 Derivation actions We partition the AMR subgraph space into a set of 9 actions, each corresponding to an action that will be taken by the NER++ system if a token receives this classification. IDENTITY This action handles the common case that the title of the node corresponding to a token is identical to the source token. To execute the action, we take the lowercased version of the token to be the title of the corresponding node. NONE This action corresponds to ignoring this token, in the case that the node should not align to any corresponding AMR fragment. VERB This action captures the verb-sense disambiguation feature of AMR. To execute on a token, we find the most similar verb in PropBank based on Jaro-Winkler distance, and adopt its most frequent sense. This serves as a reasonable baseline for word sense disambiguation, although of course accuracy would be expected to improve if a sophisticated system were incorporated. VALUE This action interprets a token by its integer value. 
The AMR representation is sensitive to the difference between a node with a title of 5 (the integer value) and “5” or “five” – the string value. This is a rare action, but is nonetheless distinct from any of the other classes. We execute this action by extracting an integer value with a regex based number normalizer, and using the result as the title of the generated node. LEMMA AMR often performs stemming and part-of-speech transformations on the source token in generating a node. For example, we get glee from gleefully. We capture this by a LEMMA action, which is executed by using the lemma of the source token as the generated node title. Note that this does not capture all lemmatizations, as 985 there are often discrepancies between the lemma generated by the lemmatizer and the correct AMR lemma. NAME AMR often references names with a special structured data type: the name construction. For example, Rover in Figure 1. We can capture this phenomenon on unseen names by attaching a created name node to the top of a span. PERSON A variant of the NAME action, this action produces a subgraph identical to the NAME action, but adds a node person as a parent. This is, in effect, a name node with an implicit entity type of person. Due to discrepancies between the output of our named entity tagger and the richer AMR named entity ontology, we only apply this tag to the person named entity tag. DATE The most frequent of the structured data type in the data, after name, is the date-entity construction (for an example see Figure 4). We deterministically take the output of SUTime (Chang and Manning, 2012) and convert it into the dateentity AMR representation. DICT This class serves as a back-off for the other classes, implementing an approach similar to Flanigan et al. (2014). In particular, we memorize a simple mapping from spans of text (such as sailor) to their corresponding most frequently aligned AMR subgraphs in the training data (i.e., the graph in Figure 3). See Section 5 for details on the alignment process. At test time we can do a lookup in this dictionary for any element that gets labeled with a DICT action. If an entry is not found in the mapping, we back off to the second most probable class proposed by the classifier. It is worth observing at this point that our actions derive much of their power from the similarity between English words and their AMR counterparts; creating an analogue of these actions for other languages remains an open problem. 4.2 Action Reliability In many cases, multiple actions could yield the same subgraph when applied to a node. In this section we introduce a method for resolving this ambiguity based on comparing the reliability with which actions generate the correct subgraph, and discuss implications. Even given a perfect action classification for a token, certain action executions can introduce Figure 5: Reliability of each action. The top row are actions which are deterministic; the second row occasionally produce errors. DICT is the least preferred action, with a relatively high error rate. errors. Some of our actions are entirely deterministic in their conversion from the word to the AMR subgraph (e.g., IDENTITY), but others are prone to making mistakes in this conversion (e.g., VERB, DICT). We define the notion of action reliability as the probability of deriving the correct node from a span of tokens, conditioned on having chosen the correct action. 
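As a rough sketch of how such an action inventory can be executed, the snippet below enumerates the nine actions and applies a few of the deterministic ones. The lemma and dict_table arguments stand in for the lemmatizer and the memorized span-to-subgraph table; none of these names come from the systems cited here, and the VERB and DATE cases are omitted because they depend on PropBank and SUTime.

```python
from enum import Enum

class Action(Enum):
    IDENTITY = "identity"
    NONE = "none"
    VERB = "verb"
    VALUE = "value"
    LEMMA = "lemma"
    NAME = "name"
    PERSON = "person"
    DATE = "date"
    DICT = "dict"

def apply_action(token, action, lemma=None, dict_table=None):
    """Deterministically derive a node title (or small subgraph spec) from a
    token given its predicted action. External resources (PropBank, SUTime,
    the memorized dictionary) are passed in or stubbed out."""
    if action is Action.NONE:
        return None                           # token generates no AMR fragment
    if action is Action.IDENTITY:
        return token.lower()                  # node title == lowercased token
    if action is Action.LEMMA:
        return lemma                          # e.g. "gleefully" -> "glee"
    if action is Action.VALUE:
        digits = "".join(ch for ch in token if ch.isdigit())
        return int(digits) if digits else None
    if action is Action.NAME:
        return {"name": {"op1": token}}       # name node over the token
    if action is Action.PERSON:
        return {"person": {"name": {"op1": token}}}
    if action is Action.DICT:
        return (dict_table or {}).get(token)  # back-off: memorized subgraph
    # VERB and DATE need PropBank / SUTime and are left out of this sketch.
    raise NotImplementedError(f"{action} requires an external resource")
```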
To provide a concrete example, our dictionary lookup classifier predicts the correct AMR subgraph 67% of the time on the dev set. We therefore define the reliability of the DICT action as 0.67. In contrast to DICT, correctly labeling a node as IDENTITY, NAME, PERSON, and NONE have action reliability of 1.0, since there is no ambiguity in the node generation once one of those actions have been selected, and we are guaranteed to generate the correct node given the correct action. We can therefore construct a hierarchy of reliability (Figure 5) – all else being equal, we prefer to generate actions from higher in the hierarchy, as they are more likely to produce the correct subgraph. This hierarchy is useful in resolving ambiguity throughout our system. During the creation of training data for our classifier (Section 4.3) from our aligner, when two actions could both generate the aligned AMR node we prefer the more reliable one. In turn, in our aligner we bias alignments towards those which generating more reliable action sequences as training data (see Section 5). The primary benefit of this action-based NER++ approach is that we can reduce the usage of low reliability actions, like DICT. The approach taken in Flanigan et al. (2014) can be 986 Action # Tokens % Total NONE 41538 36.2 DICT 30027 26.1 IDENTITY 19034 16.6 VERB 11739 10.2 LEMMA 5029 4.5 NAME 4537 3.9 DATE 1418 1.1 PERSON 1336 1.1 VALUE 122 0.1 Table 1: Distribution of action types in the proxy section of the newswire section of the LDC2014T12 dataset, generated from automatically aligned data. Input token; word embedding Left+right token / bigram Token length indicator Token starts with “non” POS; Left+right POS / bigram Dependency parent token / POS Incoming dependency arc Bag of outgoing dependency arcs Number of outgoing dependency arcs Max Jaro-Winkler to any lemma in PropBank Output tag of the VERB action if applied Output tag of the DICT action if applied NER; Left+right NER / bigram Capitalization Incoming prep * or appos + parent has NER Token is pronoun Token is part of a coref chain Token pronoun and part of a coref chain Table 2: The features for the NER++ maxent classifier. thought of as equivalent to classifying every token as the DICT action. We analyze the empirical distribution of actions in our automatically aligned corpus in Table 1. The cumulative frequency of the non-DICT actions is striking: we can generate 74% of the tokens with high reliability (p ≥0.9) actions. In this light, it is unsurprising that our results demonstrate a large gain in recall on the test set. 4.3 Training the Action Classifier Given a set of AMR training data, in the form of (graph, sentence) pairs, we first induce alignments from the graph nodes to the sentence (see Section 5). Formally, for every node ni in the AMR graph, alignment gives us some token sj (at the jth index in the sentence) that we believe generated the node ni. Then, for each action type, we can ask whether or not that action type is able to take token sj and correctly generate ni. For concreteness, imagine the token sj is running, and the node ni has the title run-01. The two action types we find that are able to correctly generate this node are DICT and VERB. We choose the most reliable action type of those available (see Figure 5) to generate the observed node – in this case, VERB. In cases where an AMR subgraph is generated from multiple tokens, we assign the action label to each token which generates the subgraph. 
Each of these tokens are added to the training set; at test time, we collapse sequences of adjacent identical action labels, and apply the action once to the resulting token span. Inducing the most reliable action (according to the alignments) for every token in the training corpus provides a supervised training set for our action classifier, with some noise introduced by the automatically generated alignments. We then train a simple maxent classifier1 to make action decisions at each node. At test time, the classifier takes as input a pair ⟨i, S⟩, where i is the index of the token in the input sentence, and S is a sequence tokens representing the source sentence. It then uses the features in Table 2 to predict the actions to take at that node. 5 Automatic Alignment of Training Data AMR training data is in the form of bi-text, where we are given a set of (sentence, graph) pairs, with no explicit alignments between them. We would like to induce a mapping from each node in the AMR graph to the token it represents. It is perfectly possible for multiple nodes to align to the same token – this is the case with sailors, for instance. It is not possible, within our framework, to represent a single node being sourced from multiple tokens. Note that a subgraph can consist of many individual nodes; in cases where a subgraph should align to multiple tokens, we generate an alignment from the subgraph’s nodes to the associated tokens in the sentence. It is empirically very rare for a subgraph to have more nodes than the token span it should align to. There have been two previous attempts at producing automatic AMR alignments. The first was 1A sequence model was tried and showed no improvement over a simple maxent classifier. 987 published as a component of JAMR, and used a rule-based approach to perform alignments. This was shown to work well on the sample of 100 hand-labeled sentences used to develop the system. Pourdamghani et al. (2014) approached the alignment problem in the framework of the IBM alignment models. They rendered AMR graphs as text, and then used traditional machine translation alignment techniques to generate an alignment. We propose a novel alignment method, since our decomposition of the AMR node generation process into a set of actions provides an additional objective for the aligner to optimize, in addition to the accuracy of the alignment itself. We would like to produce the most reliable sequence of actions for the NER++ model to train from, where reliable is taken in the sense defined in Section 4.2. To give an example, a sequence of all DICT actions could generate any AMR graph, but is very low reliability. A sequence of all IDENTITY actions could only generate one set of nodes, but does it with absolute certainty. We formulate this objective as a Boolean LP problem. Let Q be a matrix in {0, 1}|N|×|S| of Boolean constrained variables, where N are the nodes in an AMR graph, and S are the tokens in the sentence. The meaning of Qi,j = 1 can be interpreted as node ni having being aligned to token sj. Furthermore, let V be a matrix T |N|×|S|, where T is the set of NER++ actions from Section 4. Each matrix element Vi,j is assigned the most reliable action which would generate node ni from token sj. We would like to maximize the probability of the actions collectively generating a perfect set of nodes. This can be formulated linearly by maximizing the log-likelihood of the actions. Let the function REL(l) be the reliability of action l (probability of generating intended node). 
Our objective can then be formulated as follows:

\max_{Q} \sum_{i,j} Q_{i,j} \left[ \log(\mathrm{REL}(V_{i,j})) + \alpha E_{i,j} \right]    (1)

subject to

\sum_{j} Q_{i,j} = 1 \quad \forall i    (2)

Q_{k,j} + Q_{l,j} \leq 1 \quad \forall k, l, j;\ n_k ↮ n_l    (3)

where E_{i,j} is the Jaro-Winkler similarity between the title of node n_i and the token s_j, α is a hyperparameter (set to 0.8 in our experiments), and the operator ↮ denotes that two nodes in the AMR graph are both not adjacent and do not have the same title. Constraint (2), combined with the binary constraint on Q, ensures that every node in the graph is aligned to exactly one token in the source sentence. Constraint (3) ensures that only adjacent nodes or nodes that share a title can refer to the same token. The objective value penalizes alignments which map to the unreliable DICT tag, while rewarding alignments with high overlap between the title of the node and the token. Note that most incorrect alignments fall into the DICT class by default, as no other action could generate the correct AMR subgraph. Therefore, if there exists an alignment that would consume the token using another action, the optimization prefers that alignment. The Jaro-Winkler similarity term, in turn, serves as a tie-breaker between equally (un)reliable alignments. There are many packages which can solve this Boolean LP efficiently; we used Gurobi (Gurobi Optimization, 2015). Given a matrix Q that maximizes our objective, we decode the alignment as follows: for each i, align n_i to the j such that Q_{i,j} = 1. By our constraints, exactly one such j must exist. 6 Related Work Prior work in AMR and related formalisms includes Jones et al. (2012) and Flanigan et al. (2014). Jones et al. (2012), motivated by applications in Machine Translation, proposed a graphical semantic meaning representation that predates AMR, but is intimately related. They propose a hyper-edge replacement grammar (HRG) approach to parsing into and out of this graphical semantic form. Flanigan et al. (2014) forms the basis of the approach of this paper. Their system introduces the two-stage approach we use: they implement a rule-based alignment to learn a mapping from tokens to subgraphs, and train a variant of a maximum spanning tree parser adapted to graphs and with additional constraints for their relation identification (SRL++) component. Wang et al. (2015) use a transition-based algorithm to transform dependency trees into AMR parses. They achieve 64/62/63 P/R/F1 with contributions roughly orthogonal to our own. Their transformation action set could be easily augmented by the robust subgraph generation we propose here, although we leave this to future work. Beyond the connection of our work with Flanigan et al. (2014), we note that the NER++ component of AMR encapsulates a number of lexical NLP tasks. These include named entity recognition (Nadeau and Sekine, 2007; Finkel et al., 2005), word sense disambiguation (Yarowsky, 1995; Banerjee and Pedersen, 2002), lemmatization, and a number of more domain-specific tasks. For example, a full understanding of AMR requires normalizing temporal expressions (Verhagen et al., 2010; Strötgen and Gertz, 2010; Chang and Manning, 2012). In turn, the SRL++ facet of AMR takes many insights from semantic role labeling (Gildea and Jurafsky, 2002; Punyakanok et al., 2004; Srikumar, 2013; Das et al., 2014) to capture the relations between verbs and their arguments. In addition, many of the arcs in AMR have nearly syntactic interpretations (e.g., mod for adjective/adverb modification, op for compound noun expressions).
These are similar to representations used in syntactic dependency parsing (de Marneffe and Manning, 2008; McDonald et al., 2005; Buchholz and Marsi, 2006). More generally, parsing to a semantic representation is has been explored in depth for when the representation is a logical form (Kate et al., 2005; Zettlemoyer and Collins, 2005; Liang et al., 2011). Recent work has applied semantic parsing techniques to representations beyond lambda calculus expressions. For example, work by Berant et al. (2014) parses text into a formal representation of a biological process. Hosseini et al. (2014) solves algebraic word problems by parsing them into a structured meaning representation. In contrast to these approaches, AMR attempts to capture open domain semantics over arbitrary text. Interlingua (Mitamura et al., 1991; Carbonell et al., 1999; Levin et al., 1998) are an important inspiration for decoupling the semantics of the AMR language from the surface form of the text being parsed; although, AMR has a self-admitted English bias. 7 Results We present improvements in end-to-end AMR parsing on two datasets using our NER++ component. Action type classifier accuracy on an automatically aligned corpus and alignment accuracy on a small hand-labeled corpus are also reported. Dataset System P R F1 2014T12 JAMR 67.1 53.2 59.3 Our System 66.6 58.3 62.2 2013E117 JAMR 66.9 52.9 59.1 Our System 65.9 59.0 62.3 Table 3: Results on two AMR datasets for JAMR and our NER++ embedded in the JAMR SRL++ component. Note that recall is consistently higher across both datasets, with only a small loss in precision. 7.1 End-to-end AMR Parsing We evaluate our NER++ component in the context of end-to-end AMR parsing on two corpora: the newswire section of LDC2014T12 and the split given in Flanigan et al. (2014) of LDC2013E117, both consisting primarily of newswire. We compare two systems: the JAMR parser (Flanigan et al., 2014),2 and the JAMR SRL++ component with our NER++ approach. AMR parsing accuracy is measured with a metric called smatch (Cai and Knight, 2013), which stands for “s(emantic) match.” The metric is the F1 of a best-match between triples implied by the target graph, and triples in the parsed graph – that is, the set of (parent, edge, child) triples in the graph. Our results are given in Table 3. We report much higher recall numbers on both datasets, with only small (≤1 point) loss in precision. This is natural considering our approach. A better NER++ system allows for more correct AMR subgraphs to be generated – improving recall – but does not in itself necessarily improve the accuracy of the SRL++ system it is integrated in. 7.2 Component Accuracy We evaluate our aligner on a small set of 100 handlabeled alignments, and evaluate our NER++ classifier on automatically generated alignments over the whole corpus, On a hand-annotated dataset of 100 AMR parses from the LDC2014T12 corpus,3 our aligner achieves an accuracy of 83.2. This is a measurement of the percentage of AMR nodes that are aligned to the correct token in their source sentence. Note that this is a different metric than the 2Available at https://github.com/jflanigan/ jamr. 3Our dataset is publicly available at http://nlp. stanford.edu/projects/amr 989 precision/recall of prior work on alignments, and is based on both a different alignment dataset and subtly different alignment annotation scheme. In particular, we require that every AMR node aligns to some token in the sentence, which forces the system to always align nodes, even when unsure. 
A standard semantics and annotation guideline for AMR alignment is left for future work; our accuracy should be considered only an informal metric. We find our informativeness-based alignment objective slightly improves end-to-end performance when compared to the rule-based approach of (Flanigan et al., 2014), improving F1 by roughly 1 point (64/59/61 P/R/F1 to 65/59/62 P/R/F1). On the automatic alignments over the LDC2014T12 corpus, our action classifier achieved a test accuracy of 0.841. The classifier’s most common class of mistakes are incorrect DICT classifications. It is reassuring that some of these errors can be recovered from by the na¨ıve dictionary lookup finding the correct mapping. The DICT action lookup table achieved an accuracy of 0.67. This is particularly impressive given that our model moves many of the difficult semantic tasks onto the DICT tag, and that this lookup does not make use of any learning beyond a simple count of observed span to subgraph mappings. 8 Conclusion We address a key challenge in AMR parsing: the task of generating subgraphs from lexical items in the sentence. We show that a simple classifier over actions which generate these subgraphs improves end-to-end recall for AMR parsing with only a small drop in precision, leading to an overall gain in F1. A clear direction of future work is improving the coverage of the defined actions. For example, a richer lemmatizer could shift the burden of lemmatizing unknown words into the AMR lemma semantics and away from the dictionary lookup component. We hope our decomposition provides a useful framework to guide future work in NER++ and AMR in general. Acknowledgments We thank the anonymous reviewers for their thoughtful feedback. Stanford University gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. Proc. Linguistic Annotation Workshop. Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambiguation using wordnet. In Computational linguistics and intelligent text processing. Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Brad Huang, Christopher D Manning, Abby Vander Linden, Brittany Harding, and Peter Clark. 2014. Modeling biological processes for reading comprehension. In Proc. EMNLP. Sabine Buchholz and Erwin Marsi. 2006. CONLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 149–164. Association for Computational Linguistics. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In ACL (2), pages 748–752. Jaime G Carbonell, Teruko Mitamura, and Eric H Nyberg. 1999. The KANT perspective: A critique of pure transfer (and pure interlingua, pure statistics,...). Angel Chang and Chris Manning. 2012. SUTIME: a library for recognizing and normalizing time expressions. In Language Resources and Evaluation. 
Dipanjan Das, Desai Chen, Andr´e FT Martins, Nathan Schneider, and Noah A Smith. 2014. Framesemantic parsing. Computational Linguistics, 40(1):9–56. Donald Davidson. 1967. The logical form of action sentences. In Nicholas Rescher, editor, The Logic of Decision and Action, pages 81–120. University of Pittsburgh Press, Pittsburgh, PA. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation. 990 Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In ACL. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In ACL. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational linguistics, 28(3):245–288. Inc. Gurobi Optimization. 2015. Gurobi optimizer reference manual. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In EMNLP. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In COLING, pages 1359–1376. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In AAAI, Pittsburgh, PA. Lori S Levin, Donna Gates, Alon Lavie, and Alex Waibel. 1998. An interlingua based on domain actions for machine translation of task-oriented dialogues. In ICSLP, volume 98, pages 1155–1158. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In ACL. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In ACL, Morristown, NJ, USA. Teruko Mitamura, Eric H Nyberg, and Jaime G Carbonell. 1991. An efficient interlingua translation system for multi-lingual document production. Proceedings of Machine Translation Summit III. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71–106. Terence Parsons. 1990. Events in the Semantics of English: A study in subatomic semantics. MIT Press, Cambridge, MA. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning english strings with abstract meaning representation graphs. In EMNLP. Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th international conference on Computational Linguistics, page 1346. Association for Computational Linguistics. Vivek Srikumar. 2013. The semantics of role labeling. Ph.D. thesis, University of Illinois at UrbanaChampaign. Jannik Str¨otgen and Michael Gertz. 2010. Heideltime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, Uppsala, Sweden. 
Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A transition-based algorithm for amr parsing. In NAACL-HLT. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In ACL. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. AUAI Press. 991
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 992–1002, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Environment-Driven Lexicon Induction for High-Level Instructions Dipendra K. Misra∗ Kejia Tao∗ Percy Liang∗∗ Ashutosh Saxena∗ [email protected] [email protected] [email protected] [email protected] ∗Department of Computer Science, Cornell University ∗∗Department of Computer Science, Stanford University Abstract We focus on the task of interpreting complex natural language instructions to a robot, in which we must ground high-level commands such as microwave the cup to low-level actions such as grasping. Previous approaches that learn a lexicon during training have inadequate coverage at test time, and pure search strategies cannot handle the exponential search space. We propose a new hybrid approach that leverages the environment to induce new lexical entries at test time, even for new verbs. Our semantic parsing model jointly reasons about the text, logical forms, and environment over multi-stage instruction sequences. We introduce a new dataset and show that our approach is able to successfully ground new verbs such as distribute, mix, arrange to complex logical forms, each containing up to four predicates. 1 Introduction The task of mapping natural language instructions to actions for a robot has been gaining momentum in recent years (Artzi and Zettlemoyer, 2013; Tellex et al., 2011; Misra et al., 2014; Bollini et al., 2011; Guadarrama et al., 2013; Matuszek et al., 2012b; Fasola and Mataric, 2013). We are particularly interested in instructions containing verbs such as “microwave” denoting high-level concepts, which correspond to more than 10 lowlevel symbolic actions such as grasp. In this setting, it is common to find new verbs requiring new concepts at test time. For example, in Figure 1, suppose that we have never seen the verb “fill”. Can we impute the correct interpretation, and moreover seize the opportunity to learn what “fill” means in a way that generalizes to future instructions? Text: “get the cup, fill it with water and then microwave the cup” grasping cup3 ∧ near(robot1,cup3) in cup3,microwave ∧ state(microwave1,is-on) state cup3,water ∧ on(cup3,sink) Unseen verb “ fill ” is grounded at test time using environment. Figure 1: A lexicon learned on the training data cannot possibly cover all the verb-concept mappings needed at test time. Our algorithm learns the meaning of new verbs (e.g., “fill”) using the environment context. Previous work in semantic parsing handles lexical coverage in one of two ways. Kwiatkowski et al. (2010) induces a highly constrained CCG lexicon capable of mapping words to complex logical forms, but it would have to skip new words (which in Figure 1 would lead to microwaving an empty cup). Others (Berant and Liang, 2014) take a freer approach by performing a search over logical forms, which can handle new words, but the logical forms there are much simpler than the ones we consider. In this paper, we present an hybrid approach that uses a lexicon to represent complex concepts but also strongly leverages the environment to guide the search space. The environment can provide helpful cues in several ways: • Only a few environments are likely for a given scenario—e.g., the text is unlikely to ask the robot to microwave an empty cup or put books on the floor. 
• The logical form of one segment of text constrains that of the next segment—e.g., the text is unlikely to ask the robot to pick a cup and then put it back immediately in the same spot.

Figure 2: Graphical model overview: we first deterministically shallow parse the text x into a control flow graph consisting of shallow structures {ci}. Given an initial environment e1, our semantic parsing model maps these frame nodes to logical forms {zi} representing the postconditions. From this, a planner and simulator generate the action sequences {ai} and resulting environments {ei}.

We show that this environment context provides a signal for inducing new lexical entries that map previously unseen verbs to novel concepts. In the example in Figure 1, the algorithm learns that microwaving an empty cup is unlikely, and this suggests that the verb “fill” must map to actions that end up making the cup not empty. Another contribution of this paper is using postconditions as logical forms rather than actions, as in previous work (Artzi and Zettlemoyer, 2013; Misra et al., 2014). Postconditions not only reduce the search space of logical forms, but are also a more natural representation of verbs. We define a conditional random field (CRF) model over postconditions, and use a planner to convert postconditions into action sequences and a simulator to generate new environments. At test time, we use the lexicon induced from the training data, but also perform an environment-guided search over logical forms to induce new lexical entries on-the-fly. If the predicted action sequence uses a new lexical entry generated by the search, it is added to the lexicon, where it can be reused in subsequent test examples. We evaluate our algorithm on a new corpus containing text commands for a household robot. The two key findings of our experiments are: First, the environment and task context contain enough information to allow us to learn lexical entries for new verbs such as “distribute” and “mix” with complex semantics. Second, using both lexical entries generated by a test-time search and those from the lexicon induced by the training data outperforms the two individual approaches. This suggests that environment context can help alleviate the problem of having a limited lexicon for grounded language acquisition. 2 Problem Statement At training time, we are given a set of examples D = {(x(m), e(m), a(m), π(m))} for m = 1, ..., M, where x(m) is a text containing natural language instructions, e(m) is an initial environment, a(m) is a human-annotated sequence of actions, and π(m) specifies a monotonic alignment between segments of x(m) and segments of a(m). For example, given words x(m) = x1x2 and a(m) = a1a2a3, π(m) might specify that x1 aligns to a1a2 and x2 aligns to a3.
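A minimal sketch of this supervision format (field names are ours, purely for illustration): each training example pairs the segmented text and initial environment with the annotated actions and the monotonic alignment π.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Example:
    text_segments: List[str]          # x: instruction text, split into segments
    init_env: dict                    # e: initial environment (sketched as a dict)
    actions: List[str]                # a: human-annotated action sequence
    alignment: List[Tuple[int, int]]  # pi: one (start, end) action span per segment

# x = x1 x2 aligned to a = a1 a2 a3, with x1 -> a1 a2 and x2 -> a3
ex = Example(
    text_segments=["x1", "x2"],
    init_env={},
    actions=["a1", "a2", "a3"],
    alignment=[(0, 2), (2, 3)],
)
```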
At test time, given a sequence of textenvironment pairs as input {(x(n), e(n))}N n=1, we wish to generate a sequence of actions a(n) for each input pair. Note that our system is allowed to use information about one test example to improve performance on subsequent ones. We evaluate a system on its ability to recover a human-annotated sequence of actions. 3 Approach Overview Figure 2 shows our approach for mapping text x to actions a1:k given the initial environment e1. 3.1 Representation We use the following representation for the different variables in Figure 2. Environment. An environment ei is represented by a graph whose nodes are objects and edges represent spatial relations between these objects. We consider five basic spatial relations: near, grasping, on, in and below. Each object has an 993 instance ID (e.g., book9), a category name (e.g., chair, xbox), a set of properties such as graspable, pourable used for planning and a set of boolean states such as has-water, at-channel3, whose values can be changed by robot actions. The robot is also an object in the environment. For example, the objects xbox1, snacktable2, are two objects in e1 in Figure 2 with relation on between them. Postconditions. A postcondition is a conjunction of atoms or their negations. Each atom consists of either a spatial relation between two objects (e.g., on(book9, shelf3)) or a state and a value (e.g., state(cup4, has-water)). Given an environment e, the postcondition evaluates to true or false. Actions. Each action in an action sequence ai consists of an action name with a list of arguments (e.g., grasp(xbox1)). The action name is one of 15 values (grasp, moveto, wait, etc.), and each argument is either an object in the environment (e.g., xbox1), a spatial relation (e.g., in for keep(ramen2, in, kettle1), or a postcondition (e.g., for wait(state(kettle1, boiling))). Logical Forms. The logical form zi is a pair (ℓ, ξ) containing a lexical entry ℓand a mapping ξ. The lexical entry ℓcontains a parameterized postcondition such as λ⃗v.grasping(v1, v2) ∧¬near(v3, v2), and ξ maps the variables⃗v to objects in the environment. Applying the parameterized postcondition on ξ yields a postcondition; note that a postcondition can be represented by different logical forms. A lexical entry contains other information which are used for defining features, which is detailed in Section 4. Control Flow Graphs. Following previous work (Tellex et al., 2011; Misra et al., 2014), we convert the text x to a shallow representation. The particular representation we choose is a control flow graph, which encodes the sequential relation between atomic segments in the text. Figure 3 shows the control flow graph for an example text. In a control flow graph, each node is either a frame node or a conditional node. A frame node represents a single clause (e.g., “change the channel to a movie”) and has at most one successor node. Specifically, a frame node consists of a verb ν (e.g., arrange, collect), a set of object descriptions {ωi} which are the arguments of the verb (e.g., the guinness book, movie channel), and spatial relations r between the arguments (e.g., between, near). The object description ω is either an anaphoric reference (such as “it”) or a tuple containing the main noun, associated modifiers, and relative clauses. 
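Pulling these pieces of the representation together, here is a small sketch of how an environment and a postcondition could be stored and checked. The atoms mirror the examples above (in(cup3, microwave1), state(microwave1, is-on)), but the data layout is our own simplification rather than the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Set, Tuple

@dataclass
class Environment:
    # spatial relations as (relation, obj1, obj2), e.g. ("on", "xbox1", "snacktable2")
    relations: Set[Tuple[str, str, str]] = field(default_factory=set)
    # boolean object states as (object, state), e.g. ("microwave1", "is-on")
    states: Set[Tuple[str, str]] = field(default_factory=set)

def holds(atom, env: Environment) -> bool:
    """Evaluate one (possibly negated) atom against the environment."""
    negated, pred, args = atom
    if pred == "state":
        value = tuple(args) in env.states
    else:  # one of the spatial relations: near, grasping, on, in, below
        value = (pred, *args) in env.relations
    return not value if negated else value

def satisfies(postcondition, env: Environment) -> bool:
    """A postcondition is a conjunction of atoms; all of them must hold."""
    return all(holds(atom, env) for atom in postcondition)

env = Environment(relations={("on", "xbox1", "snacktable2")},
                  states={("microwave1", "is-on")})
post = [(False, "in", ("cup3", "microwave1")),      # in(cup3, microwave1)
        (False, "state", ("microwave1", "is-on"))]  # state(microwave1, is-on)
print(satisfies(post, env))  # False: cup3 is not yet in the microwave
```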
Text: “If any of the pots have food in them, then dump them out in the garbage can and then put them on the sink else keep it on the table.” ∃e category e,cup ∧state(e,food) 𝜈: dump 𝝎: [them, the garbage can] 𝑟: { in: them →garbage can } 𝜈: put 𝝎: [them, the sink] 𝑟: {𝑜n: them →the sink } 𝜈: keep 𝝎: [it, the table] 𝑟: {𝑜n: it →the table } Conditional node (𝑒𝑥𝑝𝑟) Frame Node 𝜈, 𝝎, 𝑟 𝜈: verb 𝝎: set of object description 𝜔 𝜔: (main noun or pronoun, modifiers) 𝑟: relationship between descriptions Figure 3: We deterministically parse text into a shallow structure called a control flow graph. A conditional node contains a logical postcondition with at most one existentially quantified variable (in contrast to a frame node, which contains natural language). For example, in Figure 3 the conditional node contains the expression corresponding to the text “if any of the pots has food” There are two types of conditional nodes: branching and temporal. A branching conditional node represents an “if” statement and has two successor nodes corresponding to whether the condition evaluates to true or false in the current environment. A temporal conditional node represents an “until” statement and waits until the condition is false in the environment. 3.2 Formal Overview Shallow Parsing. We deterministically convert the text x into its control flow graph G using a set of manual rules applied on its constituency parse tree from the Stanford parser (Klein and Manning, 2003). Conditionals in our dataset are simple and can be converted into postconditions directly using a few rules, unlike the action verbs (e.g., “fill”), which is the focus of this paper. The details of our shallow parsing procedure is described in the appendix. Given an environment e1, G is reduced to a single sequence of frame nodes c1, . . . , ck, by evaluating all the branch conditionals on e1. Semantic Parsing Model. For each frame node ci and given the current environment ei, the semantic parsing model (Section 5) places a distribution over logical forms zi. This logical form zi represents a postcondition on the environment after executing the instructions in ci. Planner and Simulator. Since our semantic representations involve postconditions but our model is based on the environment, we need to connect the two. We use planner and a simulator that to994 gether specify a deterministic mapping from the current environment ei and a logical form zi to a new environment ei+1. Specifically, the planner takes the current environment ei and a logical form zi and computes the action sequence ai = planner(ei, zi) for achieving the post condition represented by zi.1 The simulator takes the current environment ei and an action sequence ai and returns a new environment ei+1 = simulator(ei, ai). 4 Anchored Verb Lexicons Like many semantic parsers, we use a lexicon to map words to logical forms. Since the environment plays an central role in our approach, we propose an anchored verb lexicon, in which we store additional information about the environment in which lexical entries were previously used. We focus only on verbs since they have the most complex semantics; object references such as “cup” can be mapped easily, as described in Section 5. More formally, an anchored verb lexicon Λ contains lexical entries ℓof the following form: [ν ⇒ (λ⃗v.S, ξ)] where, ν is a verb, S is a postcondition with free variables ⃗v, and ξ is a mapping of these variables to objects. 
An example lexical entry is: [ pour ⇒(λv1v2v3.S, ξ)], where: S = grasping(v1, v2) ∧near(v1, v3) ∧¬state(v2, milk) ∧state(v3, milk) ξ = {v1 →robot1, v2 →cup1, v3 →bowl3} (anchoring) As Table 1 shows, a single verb will in general have multiple entries due to a combination of polysemy and the fact that language is higher-level than postconditions. Advantages of Postconditions. In contrast to previous work (Artzi and Zettlemoyer, 2013; Misra et al., 2014), we use postconditions instead of action sequence for two main reasons. First, postconditions generalize better. To illustrate this, consider the action sequence for the simple task of filling a cup with water. At the time of learning the lexicon, the action sequence might correspond to using a tap for filling the cup while at test time, the environment may not have a tap but instead have a pot with water. Thus, if the lexicon maps to action sequence, then it will not be applicable at test time whereas the postcondition state(z1, water) is valid in both cases. We thus shift the load of inferring environment-specific actions onto planners 1We use the symbolic planner of Rintanen (2012) which can perform complex planning. For example, to pick up a bottle that is blocked by a stack of books, the planner will first remove the books before grasping the bottle. In contrast, Artzi and Zettlemoyer (2013) use a simple search over implicit actions. Table 1: Some lexical entries for the verb “turn” Sentence Context Lexical entry [turn ⇒(λ⃗v.S, ξ)] “turn on the TV” state(v1, is-on) ∧near(v2, v1) ξ : v1 →tv1, v2 →robot1 “turn on the right state(v1, fire3) ∧near(v2, v1) back burner” ξ : v1 →stove1, v2 →robot1 “turn off the water” ¬state(v1, tap-on) ξ : v1 →sink1 “turn the television state(v1, channel6) ∧near(v1, v2) input to xbox” ξ : v1 →tv1, v2 →xbox1 and use postconditions for representation, which better captures the semantics of verbs. Second, because postconditions are higherlevel, the number of atoms needed to represent a verb is much less than the corresponding number of actions. For example, the text “microwave a cup”, maps to action sequence with 10–15 actions, the postcondition only has two atoms: in(cup2, microwave1) ∧ state(microwave, is-on). This makes searching for new logical forms more tractable. Advantages of Anchoring. Similar to the VEIL templates of Misra et al. (2014), the free variables ⃗v are associated with a mapping ξ to concrete objects. This is useful for resolving ellipsis. Suppose the following lexical entry was created at training time based on the text “throw the drinks in the trash bag”: [ℓ: throw ⇒λxyz.S(x, y, z)], where S = in(x, y) ∧¬grasping(z, x) ∧¬state(z, closed) ξ = {x →coke1, y →garbageBin1, z →robot1} Now consider a new text at test time “throw away the chips”, which does not explicitly mention where to throw the chips. Our semantic parsing algorithm (Section 5) will use the previous mapping y →garbabeBin1 to choose an object most similar to a garbage bin. 5 Semantic Parsing Model Given a sequence of frame nodes c1:k and an initial environment e1, our semantic parsing model defines a joint distribution over logical forms z1:k. Specifically, we define a conditional random field (CRF) over z1:k, as shown in Figure 2: pθ(z1:k | c1:k, e1)∝exp k X i=1 φ(ci, zi−1, zi, ei) · θ ! , (1) where φ(ci, zi−1, zi, ei) is the feature vector and θ is the weight vector. 
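To make the role of Eq. (1) concrete, the sketch below scores one candidate sequence of logical forms; phi, planner, and simulator are stand-ins for the feature function and the external planner/simulator, so only the shape of the computation is shown, not any actual implementation.

```python
def sequence_score(frames, z_seq, e1, theta, phi, planner, simulator):
    """Unnormalized log-score of logical forms z_1..z_k for frame nodes c_1..c_k,
    following Eq. (1): sum_i phi(c_i, z_{i-1}, z_i, e_i) . theta, where each next
    environment is obtained by planning for z_i and simulating the result."""
    score, e_i, z_prev = 0.0, e1, None
    for c_i, z_i in zip(frames, z_seq):
        feats = phi(c_i, z_prev, z_i, e_i)                 # feature vector
        score += sum(t * f for t, f in zip(theta, feats))  # dot product with theta
        a_i = planner(e_i, z_i)       # action sequence achieving postcondition z_i
        e_i = simulator(e_i, a_i)     # deterministic next environment e_{i+1}
        z_prev = z_i
    return score  # exp(score) is proportional to p_theta(z_1..k | c_1..k, e_1)
```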
Note that the environments e1:k are a deterministic function of the logical forms z1:k through the recurrence ei+1 = simulator(ei, planner(ei, zi)), which couples the different time steps. 995 Features. The feature vector φ(ci, zi−1, zi, ei) contains 16 features which capture the dependencies between text, logical forms, and environment. Recall that zi = ([ν ⇒(λ⃗v.S, ξ)], ξi), where ξ is the environment in which the lexical entry was created and ξi is the current environment. Let fi = (λ⃗v.S)(ξi) be the current postcondition. Here we briefly describe the important features (see the supplemental material for the full list): • Language and logical form: The logical form zi should generally reference objects mentioned in the text. Assume we have computed a correlation ρ(ω, o) between each object description ω and object o, whose construction is described later. We then define two features: precision correlation, which encourages zi to only use objects referred to in ci; and recall correlation, which encourages zi to use all the objects referred to in ci. • Logical form: The postcondition fi should be based on previously seen environments. For example, microwaving an empty cup and grasping a couch are unlikely postconditions. We define features corresponding to the average probability (based on the training data) of all conjunctions of at most two atoms in the postcondition (e.g., grasping(robot, cup)}). We do the same with their abstract versions ({grasping(v1, v2)}). In addition, we build the same set of four probability tables conditioned on verbs in the training data. For example, the abstract postcondition state(v1, water) has a higher probability conditioned on the verb “fill”. This gives us a total of 8 features of this type. • Logical form and environment: Recall that anchoring helps us in dealing with ellipsis and noise. We add a feature based on the average correlation between the objects of the new mapping ξi with the corresponding objects in the anchored mapping ξ. The other features are based on the relationship between object descriptions, similarity between ξ and ξi and transition probabilities between logical forms zi−1 and zi. These probabilities are also learned from training data. Mapping Object Descriptions. Our features rely on a mapping from object descriptions ω (e.g., “the red shiny cup”) to objects o (e.g., cup8), which has been addressed in many recent works (Matuszek et al., 2012a; Guadarrama et al., 2014; Fasola and Matari’c, 2014). One key idea is: instead of computing rigid lexical entries such as cup →cup1, we use a continuous correlation score ρ(ω, o) ∈[0, 1] that measures how well ω describes o. This flexibility allows the algorithm to use objects not explicitly mentioned in text. Given “get me a tank of water”, we might choose an approximate vessel (e.g., cup2). Given an object description ω, an object o, and a set of previously seen objects (used for anaphoric resolution), we define the correlation ρ(ω, o) using the following approach: • If ω is a pronoun, ρ(ω, o) is the ratio of the position of the last reference of o to the length of the action sequence computed so far, thus preferring recent objects. 
• Otherwise, we compute the correlation using various sources: the object’s category; the object’s state for handling metonymy (e.g., the description “coffee” correlates well with the object mug1 if mug1 contains coffee— state(mug1, has-coffee) is true), WordNet (Fellbaum, 1998) for dealing synonymy and hyponymy; and word alignments between the objects and text from Giza++ (Och and Ney, 2003) to learn domain-specific references (e.g., “Guinness book” refers to book1, not book2). More details can be found in the supplemental material. 6 Lexicon Induction from Training Data In order to map text to logical forms, we first induce an initial anchored lexicon Λ from the training data {(x(m), e(m), a(m), π(m))}M m=1. At test time, we add new lexical entries (Section 7) to Λ. Recall that shallow parsing x(m) yields a list of frame nodes c1:k. For each frame node ci and its aligned action sequence ai, we take the conjunction of all the atoms (and their negations) which are false in the current one ei but true in the next environment ei+1. We parametrize this conjunction by replacing each object with a variable, yielding a postcondition S parametrized by free variables ⃗v and the mapping ξ from ⃗v to objects in ei. We then add the lexical entry [verb(ci) ⇒ (λ⃗v.S, ξ)] to Λ. Instantiating Lexical Entries. At test time, for a given clause ci and environment ei, we generate set of logical forms zi = (ℓi, ξi). To do this, we consider the lexical entries in Λ with the same verb as ci. For each such lexical entry ℓi, we can map its free variables ⃗v to objects in ei in an exponential number of ways. Therefore, for each ℓi we only consider the logical form (ℓi, ξi) where the mapping ξi obtains the highest score under the current 996 model: ξi = arg maxξ′ φ(ci, zi−1, (ℓi, ξ′), ei) · θ. For the feature vector φ that we consider, this approximately translates to solving an integer quadratic program with variables [yij] ∈{0, 1}, where yij = 1 only if vi maps to object j. 7 Environment-Driven Lexicon Induction at Test Time Unfortunately, we cannot expect the initial lexicon Λ induced from the training set to have full coverage of the required postconditions. Even after using 90% of the data for training, we encountered 17% new postconditions on the remaining 10%. We therefore propose generating new lexical entries at test time and adding them to Λ. Formally, for a given environment ei and frame node ci, we want to generate likely logical forms. Although the space of all possible logical forms is very large, the environment constrains the possible interpretations. We first compute the set of atoms that are false in ei and that only contain objects o that are “referred” to by either ci or ci−1, where “refers” means that there exists some argument ω in ci for which o ∈ arg maxo′ ρ(ω, o′). For example, if ci corresponds to the text “distribute pillows among the couches”, we consider the atom on(pillow1, armchair1) but not on(pillow1, snacktable2) since the object armchair1 has the highest correlation to the description “couches”. Next, for each atom, we convert it into a logical form z = (ℓ, ξ) by replacing each object with a variable. While this generalization gives us a mapping ξ, we create a lexical entry ℓi = [ν ⇒ (λ⃗v.S, ∅)] without it, where S is the parameterized atom. Note that the anchored mapping is empty, representing the fact that this lexical entry was unseen during training time. 
For example, the atom state(tv1, mute) would be converted to the logical form (ℓ, ξ), where ℓ= [verb(ci) ⇒ (λv.state(v, mute), ∅] and ξ = {v →tv1}. We do not generalize state names (e.g., mute) because they generally are part of the meaning of the verb. The score φ(ci, zi−1, zi, ei) · θ is computed for the logical form zi produced by each postcondition. We then take the conjunction of every pair of postconditions corresponding to the 200 highest-scoring logical forms. This gives us new set of postconditions on which we repeat the generalization-scoring-conjunction cycle. We keep doing this while the scores of the new logical forms is increasing or while there are logical forms remaining. Train Time Anchored Lexicon 𝚲(Sec 6) ℓ= [𝑣𝑒𝑟𝑏⇒(𝜆 𝑣. 𝑆, 𝜉)] such that 𝑣𝑒𝑟𝑏= 𝑣(𝑐𝑖) Test Time Search for Logical Forms (Sec 7) Set of Logical Forms for 𝒄𝒊−𝟏𝒄𝒊, 𝒆𝒊, 𝒛𝒊−𝟏 𝑧𝑖= (ℓ, 𝜉𝑖) 𝜉𝑖is the new assignment 𝑧𝑖= ℓ, 𝜉𝑖 where ℓ= [𝑣𝑒𝑟𝑏⇒(𝜆 𝑣. 𝑆, ∅)] is a test time lexical entry Figure 4: Logical forms for a given clause ci, environment ei, and previous logical form zi−1 are generated from both a lexicon induced from training data and a test-time search procedure based on the environment. If a logical form z = ([ν ⇒(λ⃗v.S, ∅)], ξ) is used by the predicted action sequence, we add the lexical entry [ν ⇒(λ⃗v.S, ξ)] to the lexicon Λ. This is different to other lexicon induction procedures such as GENLEX (Zettlemoyer and Collins, 2007) which are done at training time only and require more supervision. Moreover, GENLEX does not use the environment context in creating new lexical entries and thus is not appropriate at test time, since it would vastly overgenerate lexical entries compared to our approach. For us, the environment thus provides implicit supervision for lexicon induction. 8 Inference and Parameter Estimation Inference. Given a text x (which is converted to c1:k via Section 3.2) and an initial environment e1, we wish to predict an action sequence a based on pθ(a1:k | c1:k, e1), which marginalizes over all logical forms z1:k (see Figure 2). To enumerate possible logical forms, semantic parsers typically lean heavily on a lexicon (Artzi and Zettlemoyer, 2013), leading to high precision but lower recall, or search more aggressively (Berant et al., 2013), leading to higher recall but lower precision. We adopt the following hybrid approach: Given ei, ci−1, ci and zi−1, we use both the lexical entries in Λ as explained in Section 6 and the search procedure in Section 7 to generate the set of possible logical forms for zi (see Figure 4). We use beam search, keeping only the highest-scoring logical form with satisfiable postconditions for each i ∈{1, . . . , k} and resulting action sequence a1:i. Parameter Estimation. We split 10% of our training data into a separate tuning set (the 90% was used to infer the lexicon). On each example in this set, we extracted the full sequence of logical forms z1:k from the action sequence a1:k based on Section 6. For efficiency, we used an objective 997 similar to pseudolikelihood to estimate the parameters θ. Specifically, we maximize the average loglikelihood over each adjacent pair of logical forms under ˜pθ: ˜pθ(zi | zi−1, ci, ei) ∝exp(φ(ci, zi−1, zi, ei)⊤θ). (2) The weights were initialized to 0. We performed 300 iterations over the validation set with a learning rate of 0.005 N . 9 Dataset and Experiments 9.1 Dataset We collected a dataset of 500 examples from 62 people using a crowdsourcing system similar to Misra et al. (2014). 
We consider two different 3D scenarios: a kitchen and a living room, each containing an average of 40 objects. Both of these scenarios have 10 environments consisting of different sets of objects in different configurations. We define 10 high-level objectives, 5 per scenario, such as clean the room, make coffee, prepare room for movie night, etc. One group of users wrote natural language commands to achieve the high-level objectives. Another group controlled a virtual robot to accomplish the commands given by the first group. The dataset contains considerable variety, consisting of 148 different verbs, an average of 48.7 words per text, and an average of 21.5 actions per action sequence. Users make spelling and grammar errors in addition to occasionally taking random actions not relevant to the text. The supplementary material contains more details. We filtered out 31 examples containing fewer than two action sequences. Of the remaining examples, 378 were used for training and 91 were used for test. Our algorithm is tested on four new environments (two from each scenario). 9.2 Experiments and Results Evaluation Metrics. We consider two metrics, IED and END, which measure accuracy based on the action sequence and environment, respectively. Specifically, the IED metric (Misra et al., 2014) is the edit distance between predicted and true action sequence. The END metric is the Jaccard index of sets A and B, where A is the set of atoms (e.g., on(cup1,table1)) whose truth value changed due to simulating the predicted action sequence, and B is that of the true action sequence. Baselines. We compare our algorithm with the following baselines: Table 3: Results on the metrics and baselines described in section 9.2. The numbers are normalized to 100 with larger values being better. Algorithm IED END Chance 0.3 0.5 Manually Defined Templates 2.5 1.8 UBL- Best Parse (Kwiatkowski et al., 2010) 5.3 6.9 VEIL (Misra et al., 2014) 14.8 20.7 Model with only train-time lexicon induction 20.8 26.8 Model with only test-time lexicon induction 21.9 25.9 Full Model 22.3 28.8 1. Chance: Randomly selects a logical form for every frame node from the set of logical forms generated by generalizing all possible postconditions that do not hold in the current environment. These postconditions could contain up to 93 atoms. 2. Manually Defined Templates: Defines a set of postcondition templates for verbs similar to Guadarrama (2013). 3. UBL-Best Parse (Kwiatkowski et al., 2010): UBL algorithm trained on text aligned with postconditions and a noun-phrase seed lexicon. The planner uses the highest scoring postcondition given by UBL to infer the action sequence. 4. VEIL (Misra et al., 2014): Uses action sequences as logical forms and does not generate lexical entries at test time. We also consider two variations of our model: (i) using only lexical entries induced using the training data, and (ii) using only the logical forms induced at test-time by the search procedure. The results are presented in Table 3. We observe that our full model outperforms the baseline and the two pure search- and lexicon-based variations of our model. We further observe that adding the search procedure (Section 7) improved the accuracy by 1.5% on IED and 2% on END. The logical forms generated by the search were able to successfully map 48% of the new verbs. Table 2 shows new verbs and concepts that the algorithm was able to induce at test time. 
The algorithm was able to correctly learn the lexical entries for the verbs “distribute” and “mix”, while the ones for verbs “change” and “boil” were only partly correct. The postconditions in Table 2 are not structurally isomorphic to previously-seen logical forms; hence they could not have been handled by using synonyms or factored lexicons (Kwiatkowski et al., 2011). The poor performance of UBL was because the best logical form often produced an unsatisfiable postcondition. This can be remedied by joint modeling with the environ998 Table 2: New verbs and concepts induced at test time (Section 7). Text Postcondition represented by the learned logical form # Log. forms explored “mix it with ice cream and syrup”state(cup2, ice-cream1) ∧state(cup2, vanilla) 15 “distribute among the couches” ∧j∈{1,3}on(pillowj, loveseat1) ∧on(pillowi+1, armchairi+1)386 “boil it on the stove” state(stove, stovefire1) ∧state(kettle, water) 109 “change the channel to a movie” state(tv1, channel4) ∧on(book1, loveseat1) 98 ment. The VEIL baseline used actions for representation and does not generalize as well as the postconditions in our logical forms. It is also instructive to examine the alternate postconditions that the search procedure considers. For the first example in Table 2, the following postcondition was considered by not selected: grasping(robot, icecream2)∧grasping(robot, syrup1) While this postcondition uses all the objects described in the text, the environment-based features suggest it makes little sense for the task to end with the robot eternally grasping objects. For the second example, alternate postconditions considered included: 1. on(pillow1, pillow2) ∧on(pillow3, pillow4) 2. ∧4 j=1 on(pillowj, loveseat1) 3. ∧3 j=1 near(robot1, armchairj) The algorithm did not choose options 1 or 3 since the environment-based features recognizes these as unlikely configurations. Option 2 was ruled out since the recall correlation feature realizes that not all the couches are mentioned in the postcondition. To test how much features on the environment help, we removed all such features from our full model. We found that the accuracy fell to 16.0% on the IED metric and 16.6% on the END metric, showing that the environment is crucial. In this work, we relied on a simple deterministic shallow parsing step. We found that shallow parsing was able to correctly process the text in only 46% of the test examples, suggesting that improving this initial component or at least modeling the uncertainty there would be beneficial. 10 Related Work Our work uses semantic parsing to map natural language instructions to actions via novel concepts, which brings together several themes: actions, semantic parsing, novel concepts, and robotics. Mapping Text to Actions. Several works (Branavan et al., 2009; Branavan et al., 2010; Vogel and Jurafsky, 2010) use reinforcement learning to directly map to text to actions, and do not even require an explicit model of the environment. However, they can only handle simple actions, whereas our planner and simulator allows us to work with postconditions, and thus tackle high-level instructions. Branavan et al. (2012) extract precondition relations from text, learn to map text to subgoals (postconditions) for a planner. However, their postconditions are atomic, whereas ours are complex conjunctions. 
Other works (Chen and Mooney, 2011; Kim and Mooney, 2012; Kollar et al., 2010; Fasola and Mataric, 2013) have focused only on navigational verbs and spatial relations, but do not handle highlevel verbs. Artzi and Zettlemoyer (2013) also fall into the above category and offer a more compositional treatment. They focus on how words compose; we focus on unraveling single words. The broader problem of grounded language acquisition, involving connecting words to aspects of a situated context has been heavily studied (Duvallet et al., 2014; Yu and Siskind, 2013; Chu et al., 2013; Chen and Mooney, 2008; Mooney, 2008; Fleischman and Roy, 2005; Liang et al., 2009). Semantic Parsing. In semantic parsing, much work has leveraged CCG (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010). One challenge behind lexically-heavy approaches is ensuring adequate lexical coverage. Kwiatkowski et al. (2011) enhanced generalization by factoring a lexical entry into a template plus a lexeme, but the rigidity of the template remains. This is satisfactory when words map to one (or two) predicates, which is the case in most existing semantic parsing tasks. For example, in Artzi and Zettlemoyer (2013), verbs are associated with single predicates (“move” to move, “walk” to walk, etc.) In our setting, verbs contain multi-predicate postconditions, for which these techniques would not be suitable. As annotated logical forms for training semantic parsers are expensive to obtain, several works (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Kwiatkowski et al., 2013) have developed methods to learn from weaker supervision, and as in our work, use the execution of the logical forms to guide the search. Our supervision is even weaker in that we are able to learn at test time 999 from partial environment constraints. Grounding to Novel Concepts. Guadarrama et al. (2014) map open vocabulary text to objects in an image using a large database. Matuszek et al. (2012a) create new predicates for every new adjective at test time. Others (Kirk et al., 2014) ask users for clarification. In contrast, we neither have access to large databases for this problem, nor do we do create new predicates or use explicit supervision at test time. Robotic Applications. Our motivation behind this work is to build robotic systems capable of taking commands from users. Other works in this area have considered mapping text to a variety of manipulation actions (Sung et al., 2015). Levine et al. (2015) and Lenz et al. (2015) focus on specific manipulation actions. In order to build a representation of the environment, Ren et al. (2012) and Wu et al. (2014) present vision algorithms but only output symbolic labels, which could act as inputs to our system. In future work, we also plan to integrate our work with RoboBrain (Saxena et al., 2014) to leverage these existing systems for building a robotic system capable of working with physical world data. 11 Conclusion We have presented an algorithm for mapping text to actions that induces lexical entries at test time using the environment. Our algorithm couples the lexicon extracted from training data with a testtime search that uses the environment to reduce the space of logical forms. Our results suggest that using the environment to provide lexical coverage of high-level concepts is a promising avenue for further research. Acknowledgements. 
This research was supported by the ONR (award N00014-14-1-0156), a Sloan Research Fellowship to the third author, and a Microsoft Research Faculty Fellowship and NSF Career Award to the fourth author. We thank Aditya Jami and Jaeyong Sung for useful discussions. We also thank Jiaqi Su for her help with data collection and all the people who participated in the user study. Reproducibility. Code, data, and experiments for this paper are available on the CodaLab platform at https://www.codalab.org/worksheets/ 0x7f9151ec074f4f589e4d4786db7bb6de/. Demos can be found at http://tellmedave.com. Appendix: Parsing Text into Control Flow Graph. We first decompose the text x into its control flow graph G using a simple set of rules: • The parse tree of x is generated using the Stanford parser (Klein and Manning, 2003) and a frame node is created for each non-auxiliary verb node in the tree. • Conditional nodes are discovered by looking for the keywords until, if, after, when. The associated subtree is then parsed deterministically using a set of a rules. For example, a rule parses “for x minutes” to for(digit:x,unit:minutes). We found that all conditionals can be interpreted against the initial environment e1, since our world is fullyobservable, deterministic, and the user giving the command has full view of the world. • To find objects, we look for anaphoric terminal nodes or nominals whose parent is not a nominal or which have a PP sibling. These are processed into object descriptions ω. • Object descriptions ω are attached to the frame node, whose verb is nearest in the parse tree to the main noun of ω. • Nodes corresponding to {IN,TO,CC,“,”} are added as the relation between the corresponding argument objects. • If there is a conjunction between two objects in a frame node and if these objects have the same relation to other objects, then we split the frame node into two sequential frame nodes around these objects. For example, a frame node corresponding to the text segment “take the cup and bowl from table” is split into two frame nodes corresponding to “take the cup from table” and “take bowl from table”. • A temporal edge is added between successive frame nodes in the same branch of a condition. A temporal edge is added between a conditional node and head of the true and false branches of the condition. The end of all branches in a sentence are joined to the starting node of the successive sentence. References Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49–62. J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL). J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). 1000 M. Bollini, J. Barry, and D. Rus. 2011. Bakebot: Baking cookies with the PR2. In The PR2 Workshop, IROS. S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACLIJCNLP), pages 82–90. S. Branavan, L. Zettlemoyer, and R. Barzilay. 2010. Reading between the lines: Learning to map highlevel instructions to commands. In Association for Computational Linguistics (ACL), pages 1268– 1277. S. Branavan, N. Kushman, T. Lei, and R. 
Barzilay. 2012. Learning high-level planning from text. In Association for Computational Linguistics (ACL), pages 126–135. D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML), pages 128–135. D. L. Chen and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI), pages 859–865. V. Chu, I. McMahon, L. Riano, C. McDonald, Q. He, J. Perez-Tejada, M. Arrigo, N. Fitter, J. Nappo, T. Darrell, et al. 2013. Using robotic exploratory procedures to learn the meaning of haptic adjectives. In International Conference on Intelligent Robots and Systems (IROS). J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL), pages 18–27. F. Duvallet, M. R. Walter, T. Howard, S. Hemachandra, J. Oh, S. Teller, N. Roy, and A. Stentz. 2014. Inferring maps and behaviors from natural language instructions. In International Symposium on Experimental Robotics (ISER). J. Fasola and M. Mataric. 2013. Using semantic fields to model dynamic spatial relations in a robot architecture for natural language instruction of service robots. In International Conference on Intelligent Robots and Systems (IROS). J. Fasola and M. J. Matari’c. 2014. Interpreting instruction sequences in spatial language discourse with pragmatics towards natural human-robot interaction. In International Conference on Robotics and Automation (ICRA), pages 6667–6672. C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. M. Fleischman and D. Roy. 2005. Intentional context in situated natural language learning. In Computational Natural Language Learning (CoNLL), pages 104–111. S. Guadarrama, L. Riano, D. Golland, D. Gouhring, Y. Jia, D. Klein, P. Abbeel, and T. Darrell. 2013. Grounding spatial relations for human-robot interaction. In International Conference on Intelligent Robots and Systems (IROS). S. Guadarrama, E. Rodner, K. Saenko, N. Zhang, R. Farrell, J. Donahue, and T. Darrell. 2014. Openvocabulary object retrieval. In Robotics: Science and Systems (RSS). J. Kim and R. Mooney. 2012. Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision. In Computational Natural Language Learning (CoNLL), pages 433–444. N. H. Kirk, D. Nyga, and M. Beetz. 2014. Controlled natural languages for language generation in artificial cognition. In International Conference on Robotics and Automation (ICRA), pages 6667–6672. D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Association for Computational Linguistics (ACL), pages 423–430. T. Kollar, S. Tellex, D. Roy, and N. Roy. 2010. Grounding verbs of motion in natural language commands to robots. In International Symposium on Experimental Robotics (ISER). T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512–1523. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. 
In Empirical Methods in Natural Language Processing (EMNLP). I. Lenz, R. Knepper, and A. Saxena. 2015. Deepmpc: Learning deep latent features for model predictive control. In Robotics Science and Systems (RSS). S. Levine, C. Finn, T. Darrell, and P. Abbeel. 2015. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702. P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 91–99. 1001 P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599. C. Matuszek, N. FitzGerald, L. Zettlemoyer, L. Bo, and D. Fox. 2012a. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning (ICML), pages 1671–1678. C. Matuszek, E. Herbst, L. Zettlemoyer, and D. Fox. 2012b. Learning to parse natural language commands to a robot control system. In International Symposium on Experimental Robotics (ISER). D. Misra, J. Sung, K. Lee, and A. Saxena. 2014. Tell Me Dave: Context-sensitive grounding of natural language to mobile manipulation instructions. In Robotics: Science and Systems (RSS). R. Mooney. 2008. Learning to connect language and perception. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1598–1601. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51. X. Ren, L. Bo, and D. Fox. 2012. Rgb-(d) scene labeling: Features and algorithms. In Computer Vision and Pattern Recognition (CVPR), pages 2759–2766. J. Rintanen. 2012. Planning as satisfiability: Heuristics. Artificial Intelligence, 193. A. Saxena, A. Jain, O. Sener, A. Jami, D. K. Misra, and H. S. Koppula. 2014. Robobrain: Largescale knowledge engine for robots. arXiv preprint arXiv:1412.0691. J. Sung, S. H. Jin, and A. Saxena. 2015. Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3d pointclouds. arXiv preprint arXiv:1504.03071. S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Association for the Advancement of Artificial Intelligence (AAAI). A. Vogel and D. Jurafsky. 2010. Learning to follow navigational directions. In Association for Computational Linguistics (ACL), pages 806–814. C. Wu, I. Lenz, and A. Saxena. 2014. Hierarchical semantic labeling for task-relevant RGB-D perception. In Robotics: Science and Systems (RSS). H. Yu and J. M. Siskind. 2013. Grounded language learning from video described with sentences. In Association for Computational Linguistics (ACL), pages 53–63. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687. 1002
2015
96
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1003–1013, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Structural Representations for Learning Relations between Pairs of Texts Simone Filice and Giovanni Da San Martino and Alessandro Moschitti ALT, Qatar Computing Research Institute, Hamad Bin Khalifa University {sfilice,gmartino,amoschitti}@qf.org.qa Abstract This paper studies the use of structural representations for learning relations between pairs of short texts (e.g., sentences or paragraphs) of the kind: the second text answers to, or conveys exactly the same information of, or is implied by, the first text. Engineering effective features that can capture syntactic and semantic relations between the constituents composing the target text pairs is rather complex. Thus, we define syntactic and semantic structures representing the text pairs and then apply graph and tree kernels to them for automatically engineering features in Support Vector Machines. We carry out an extensive comparative analysis of stateof-the-art models for this type of relational learning. Our findings allow for achieving the highest accuracy in two different and important related tasks, i.e., Paraphrasing Identification and Textual Entailment Recognition. 1 Introduction Advanced NLP systems, e.g., IBM Watson system (Ferrucci et al., 2010), are the result of effective use of syntactic/semantic information along with relational learning (RL) methods. This research area is rather vast including, extraction of syntactic relations, e.g., (Nastase et al., 2013), predicate relations, e.g., Semantic Role Labeling (Carreras and M`arquez, 2005) or FrameNet parsing (Gildea and Jurafsky, 2002) and relation extraction between named entities, e.g., (Mintz et al., 2009). Although extremely interesting, the above methods target relations only between text constituents whereas the final goal of an intelligent system would be to interpret the semantics of larger pieces of text, e.g., sentences or paragraphs. This line of research relates to three broad fields, namely, Question Answering (QA) (Voorhees and Tice, 1999), Paraphrasing Identification (PI) (Dolan et al., 2004) and Recognition of Textual Entailments (RTE) (Giampiccolo et al., 2007). More generally, RL from text can be denied as follows: given two text fragments, the main goal is to derive relations between them, e.g., either if the second fragment answers the question, or conveys exactly the same information or is implied by the first text fragment. For example, the following two sentences: - License revenue slid 21 percent, however, to $107.6 million. - License sales, a key measure of demand, fell 21 percent to $107.6 million. express exactly the same meaning, whereas the next one: - She was transferred again to Navy when the American Civil War began, 1861. implies: - The American Civil War started in 1861. Automatic learning a model for deriving the relations above is rather complex as any of the text constituents, e.g., License revenue, a key measure of demand, in the two sentences plays an important role. Therefore, a suitable approach should exploit representations that can structure the two sentences and put their constituents in relation. 
Since the dependencies between constituents can be an exponential number and representing structures in learning algorithms is rather challenging, automatic feature engineering through kernel methods (Shawe-Taylor and Cristianini, 2004; Moschitti, 2006) can be a promising direction. In particular, in (Zanzotto and Moschitti, 2006), we represented the two evaluating sentences for the RTE task with syntactic structures and then applied tree kernels to them. The resulting system was very accurate but, unfortunately, it could not scale to large datasets as it is based on a compu1003 tationally exponential algorithm. This prevents its application to PI tasks, which typically require a large dataset to train the related systems. In this paper, we carry out an extensive experimentation using different kernels based on trees and graphs and their combinations with the aim of assessing the best model for relation learning between two entire sentences (or even paragraphs). More in detail, (i) we design many models for RL combining state-of-the-art tree kernels and graph kernels and apply them to innovative computational structures. These innovative combinations use for the fist time semantic/syntactic tree kernels and graph kernels for the tackled tasks. (ii) Our kernels provide effective and efficient solutions, which solve the previous scalability problem and, at the same time, exceed the state of the art on both RTE and PI. Finally, our study suggests research directions for designing effective graph kernels for RL. 2 Related Work In this paper, we apply kernel methods, which enable an efficient comparison of structures in huge, possibly infinite, feature spaces. While for trees, a comparison using all possible subtrees is possible, designing kernel functions for graphs with such property is an NP-Hard problem (i.e., it shows the same complexity of the graph isomorphism problem) (Gartner et al., 2003). Thus most kernels for graphs only associate specific types of substructures with features, such as paths (Borgwardt and Kriegel, 2005; Heinonen et al., 2012), walks (Kashima et al., 2003; Vishwanathan et al., 2006) and tree structures (Cilia and Moschitti, 2007; Mah´e and Vert, 2008; Shervashidze et al., 2011; Da San Martino et al., 2012). We exploit structural kernels for PI, whose task is to evaluate whether a given pair of sentences is in the paraphrase class or not, (see for example (Dolan et al., 2004)). Paraphrases can be seen as a restatement of a text in another form that preserves the original meaning. This task has a primary importance in many other NLP and IR tasks such as Machine Translation, Plagiarism Detection and QA. Several approaches have been proposed, e.g., (Socher et al., 2011) apply a recursive auto encoder with dynamic pooling, and (Madnani et al., 2012) use eight machine translation metrics to achieve the state of the art. To our knowledge no previous model based on kernel methods has been applied before: with such methods, we outperform the state of the art in PI. A description of RTE can be found in (Giampiccolo et al., 2007): it is defined as a directional relation extraction between two text fragments, called text and hypothesis. The implication is supposed to be detectable only based on the text content. Its applications are in QA, Information Extraction, Summarization and Machine translation. 
One of the most performing approaches of RTE 3 was (Iftene and Balahur-Dobrescu, 2007), which largely relies on external resources (i.e., WordNet, Wikipedia, acronyms dictionaries) and a base of knowledge developed ad hoc for the dataset. In (Zanzotto and Moschitti, 2006), we designed an interesting but computationally expensive model using simple syntactic tree kernels. In this paper, we develop models that do not use external resources but, at the same time, are efficient and approach the state of the art in RTE. 3 Structural kernels Kernel Machines carry out learning and classification by only relying on the inner product between instances. This can be efficiently and implicitly computed by kernel functions by exploiting the following dual formulation of the model (hyperplane): P i=1..l yiαiφ(oi) · φ(o) + b = 0, where yi are the example labels, αi the support vector coefficients, oi and o are two objects, φ is a mapping from the objects to feature vectors ⃗xi and φ(oi) · φ(o) = K(oi, o) is the kernel function implicitly defining such mapping. In case of structural kernels, K maps objects in substructures, thus determining their size and shape. Given two structures S1 and S2, our general definition of structural kernels is the following: K(S1, S2) = X s1⊆S1,s2⊆S2,si∈S kiso(s1, s2), (1) where si are substructures of Si, S is the set of admissible substructures, and kiso determines if the two substructures are isomorphic, i.e., it outputs 1 if s1 and s2 are isomorphic and 0 otherwise. In the following, we also provide a more computational-oriented definition of structural kernels to more easily describe those we use in our work: Let the set S = {s1, s2, . . . , s|S|} be the substructure space and χi(n) be an indicator function, equal to 1 if the target si is rooted at node n and 1004 equal to 0 otherwise. A structural-kernel function over S1 and S2 is K(S1, S2) = X n1∈NS1 X n2∈NS2 ∆(n1, n2), (2) where NS1 and NS2 are the sets of the S1’s and S2’s nodes, respectively and ∆(n1, n2) = |S| X i=1 χi(n1)χi(n2). (3) The latter is equal to the number of common substructures rooted in the n1 and n2 nodes. In order to have a similarity score between 0 and 1, a normalization in the kernel space, i.e., K(S1,S2) √ K(S1,S1)×K(S2,S2) is usually applied. From a practical computation viewpoint, it is convenient to divide structural kernels in two classes of algorithms working either on trees or graphs. 3.1 The Partial Tree Kernel (PTK) PTK (Moschitti, 2006) generalizes a large class of tree kernels as it computes one of the most general tree substructure spaces. Given two trees S1 and S2, PTK considers any connected subset of nodes as possible feature of the substructure space, and counts how many of them are shared by S1 and S2. Its computation is carried out by Eq. 2 using the following ∆PTK function: if the labels of n1 and n2 are different ∆PTK(n1, n2) = 0; else ∆PTK(n1, n2) = µ  λ2+ X ⃗I1,⃗I2,l(⃗I1)=l(⃗I2) λd(⃗I1)+d(⃗I2) l(⃗I1) Y j=1 ∆P T K(cn1(⃗I1j), cn2(⃗I2j))  where µ, λ ∈[0, 1] are two decay factors, ⃗I1 and ⃗I2 are two sequences of indices, which index subsequences of children u, ⃗I = (i1, ..., i|u|), in sequences of children s, 1 ≤i1 < ... < i|u| ≤|s|, i.e., such that u = si1..si|u|, and d(⃗I) = i|u| − i1 + 1 is the distance between the first and last child. The PTK computational complexity is O(pρ2|NS1||NS2|) (Moschitti, 2006), where p is the largest subsequence of children that we want to consider and ρ is the maximal outdegree observed in the two trees. 
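Equation (2) reduces any structural kernel to a double sum of Δ values over node pairs, followed by the normalization given earlier. The sketch below makes that scaffold concrete; note that the Δ used here (an exact-match test on whole subtrees) covers a far smaller substructure space than the PTK Δ just defined, and is chosen only to keep the recursion readable.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def nodes(tree):
    yield tree
    for child in tree.children:
        yield from nodes(child)

def delta_subtree(n1, n2):
    # Deliberately simple Delta: 1 if the whole subtrees rooted at n1 and n2
    # are identical, 0 otherwise (PTK's Delta also counts partial subtrees).
    if n1.label != n2.label or len(n1.children) != len(n2.children):
        return 0
    product = 1
    for c1, c2 in zip(n1.children, n2.children):
        product *= delta_subtree(c1, c2)
    return product

def structural_kernel(t1, t2, delta=delta_subtree):
    # Eq. (2): sum Delta over all pairs of nodes from the two structures.
    return sum(delta(n1, n2) for n1 in nodes(t1) for n2 in nodes(t2))

def normalized_kernel(t1, t2, delta=delta_subtree):
    k12 = structural_kernel(t1, t2, delta)
    k11 = structural_kernel(t1, t1, delta)
    k22 = structural_kernel(t2, t2, delta)
    return k12 / (k11 * k22) ** 0.5 if k11 and k22 else 0.0

Swapping in the full PTK or SPTK Δ only changes the delta argument; the outer double sum and the normalization stay the same.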
However the average running time tends to be linear for natural language syntactic trees (Moschitti, 2006). 3.2 Smoothed Partial Tree Kernel (SPTK) Constraining the application of lexical similarity to words embedded in similar structures provides clear advantages over all-vs-all words similarity, which tends to semantically diverge. Indeed, syntax provides the necessary restrictions to compute an effective semantic similarity. SPTK (Croce et al., 2011) generalizes PTK by enabling node similarity during substructure matching. More formally, SPTK is computed by Eq. 2 using the following ∆SPTK(n1, n2) = P|S| i,j=1 χi(n1)χj(n2)Σ(si, sj), where Σ is a similarity between structures1. The recursive definition of ∆SPTK is the following: 1. if n1 and n2 are leaves ∆SPTK(n1, n2) = µλσ(n1, n2); 2. else ∆SPTK(n1, n2) = µσ(n1, n2) ×  λ2+ X ⃗I1,⃗I2,l(⃗I1)=l(⃗I2) λd(⃗I1)+d(⃗I2) l(⃗I1) Y j=1 ∆σ(cn1(⃗I1j), cn2(⃗I2j))  , where σ is any similarity between nodes, e.g., between their lexical labels, and the other variables are the same of PTK. The worst case complexity of SPTK is identical to PTK and in practice is not higher than O(|NS1||NS2|). 3.3 Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) When general subgraphs are used as features in a kernel computation, eq. 1 and 2 become computationally intractable (Gartner et al., 2003). To solve this problem, we need to restrict the set of considered substructures S. (Costa and De Grave, 2010) defined NSPDK such that the feature space is only constituted by pairs of subgraphs (substructures) that are (i) centered in two nodes n1 and n2 such that their distance is not more than D; and (ii) constituted by all nodes (and their edges) at an exact distance h from n1 or n2, where the distance between two nodes is defined as the number of edges in the shortest path connecting them. More formally, let G, NG and EG be a graph and its set of nodes and edges, respectively, the substructure space S = SG(H, D) used by NSPDK in eqs 2 and 3 is: {(γh(n), γh(n′)) : 1 ≤h ≤H, n, n′ ∈NG, d(n, n′) ≤D}, where γh(n) returns the subgraph obtained by executing h steps of a breadth-first visit of G starting from node n and d(n, n′) is the distance between two nodes in the graph. Note that (i) any feature 1Note that this generalizes Eq. 3. 1005 of the space is basically a pair of substructures; and (ii) there is currently no efficient (implicit) formulation for computing such kernel. In contrast, when H and D are limited, it is simple to compute the space SG(H, D) explicitly. In such case, the complexity of the kernel is given by the substructure extraction step, which is O(|NG| × hρ log ρ). 3.4 Kernel Combinations Previous sections have shown three different kernels. Among them, NSPDK is actually an explicit kernel, where the features are automatically extracted with a procedure. In NLP, features are often manually defined by domain experts, who know the linguistic phenomena involved in the task. When available, such features are important as they encode some of the background knowledge on the task. Therefore, combining different feature spaces is typically very useful. Fortunately, kernel methods enable an easy integration of different kernels or feature spaces, i.e., the kernel sum produces the joint feature space and it is still a valid kernel. In the next section, we show representations of text, i.e., structures and features, specific to PI and RTE. 4 Representations for RL from text The kernels described in the previous section can be applied to generic trees and graphs. 
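Because NSPDK admits an explicit feature map when H and D are small (Section 3.3), it can be computed by extracting features from each graph and taking a dot product. The sketch below follows that recipe but replaces the exact neighborhood-isomorphism test with a crude sorted (distance, label) signature, so it is an approximation for illustration rather than the kernel of Costa and De Grave (2010); a graph is assumed to be given as an adjacency dict (every node present as a key) plus a node-label dict.

from collections import Counter, deque

def bfs_within(adj, start, h):
    # Distances of all nodes within h edges of start (breadth-first visit).
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == h:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def neighborhood_signature(adj, labels, center, h):
    # Crude stand-in for a canonical form of the h-neighborhood rooted at center.
    dist = bfs_within(adj, center, h)
    return tuple(sorted((d, labels[n]) for n, d in dist.items()))

def nspdk_features(adj, labels, H, D):
    feats = Counter()
    for n in adj:
        for m, d in bfs_within(adj, n, D).items():   # node pairs with d(n, m) <= D
            for h in range(1, H + 1):
                feats[(h, d,
                       neighborhood_signature(adj, labels, n, h),
                       neighborhood_signature(adj, labels, m, h))] += 1
    return feats

def nspdk_kernel(g1, g2, H=1, D=3):
    # Small H and D keep the explicit feature space tractable;
    # the paper selects them by cross-validation.
    f1, f2 = nspdk_features(*g1, H, D), nspdk_features(*g2, H, D)
    return sum(v * f2[k] for k, v in f1.items())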
Automatic feature engineering using structural kernels requires the design of structures for representing data examples that are specific to the learning task we want to tackle. In our case, we focus on RL, which consists in deriving the semantic relation between two entire pieces of text. We focus on two well-understood relations, namely, paraphrasing and textual implications. The tasks are simply defined as: given two texts a1 and a2, automatically classify if (i) a1 is a paraphrase of a2 and/or (ii) a1 implies a2. Although the two tasks are linguistically and conceptually rather different, they can be modeled in a similar way from a shallow representation viewpoint. This is exactly the perspective we would like to keep for showing the advantage of using kernel methods. Therefore, in the following, we define sentence representations that can be suitably used for both tasks and then we rely on structural kernels and the adopted learning algorithm for exploring the substructures relevant to the different tasks. 4.1 Tree Representations An intuitive understanding of our target tasks suggests that syntactic information is essential to achieve high accuracy. Therefore, we consider the syntactic parse trees of the pair of sentences involved in the evaluation. For example, Fig. 1 shows the syntactic constituency trees of the sentences reported in the introduction (these do not include the green label REL and the dashed edges). Given two pairs of sentences, pa = ⟨a1, a2⟩and pb = ⟨b1, b2⟩, an initial kernel for learning the tasks, can be the simple tree kernel sum, e.g., PTK(a1, b1) + PTK(a2, b2) as was defined in (Moschitti, 2008). This kernel works in the space of the union of the sets of all subtrees from the upper and lower trees, e.g.: a1:  [PP [TO [to::t]][NP [QP [$ [$::$]][QP [CD [107.6::c]]]]]], [PP [TO][NP [QP [$][QP [CD [107]]]]]], [PP [TO][NP [QP [QP [CD]]]]], [PP [NP [QP [QP]]]], ... S a2:  [NP [NP [DT [a::d]] [JJ [key::j] NN]][PP]], [NP [NP [DT] [JJ NN]][PP]], [NP [NP [JJ NN]][PP]], [NP [NP [NN]][PP]], [NP [NP [JJ]][PP]], ... However, such features cannot capture the relations between the constituents (or semantic lexical units) from the two trees. In contrast, these are essential to learn the relation between the two entire sentences2. To overcome this problem, in (Zanzotto and Moschitti, 2006), we proposed the use of placeholders for RTE: the main idea was to annotate the matches between the constituents of the two sentences, e.g., 107.6 millions, on both trees. This way the tree fragments in the generated kernel space contained an index capturing the correspondences between a1 and a2. The critical drawback of this approach is that other pairs, e.g., pb, will have in general different indices, making the representation very sparse. Alternatively, we experimented with models that select the best match between all possible placeholder assignments across the two pairs. Although we obtained a good improvement, such solution required an exponential computational time and the selection of the max 2Of course assuming that text meaning is compositional. 1006 ROOT S NP-REL JJ-REL license::j NN revenue::n VP VBD slide::vNP-REL CD-REL 21::c NN-REL percent::n , ,::, ADVB RB however::r , ,::, PP TO to::t NP QP-REL $-REL $::$ QP-REL CD-REL 107.6::c CD-REL million::c . .::. 
ROOT S NP NP-REL JJ-REL license::j NNS sale::n , ,::, NP NP DT a::d JJ key::j NN measure::n PP IN of::i NP NN demand::n , ,::, VP VBD fall::v NP-REL CD-REL 21::c NN-REL percent::n PP TO to:t NP QP-REL $-REL $::$ QP-REL CD-REL 107.6::c CD-REL million::c . .::. Figure 1: Text representations for PI and RTE: (i) pair of trees, a1 (upper) and a2 (lower), (ii) combined in a graph with dashed edges, and (iii) labelled with the tag REL (in green). The nodes highlighted in yellow constitute a feature for the NSPDK kernel (h = 1, D = 3) centered at the nodes ADVB and NP-REL. assignment made our similarity function a nonvalid kernel. Thus, for this paper, we prefer to rely on a more recent solution we proposed for passage reranking in the QA domain (Severyn and Moschitti, 2012; Severyn et al., 2013a; Severyn et al., 2013b), and for Answer Selection (Severyn and Moschitti, 2013). It consists in simply labeling matching nodes with a special tag, e.g., REL, which indicates the correspondences between words. REL is attached to the father and grandfather nodes of the matching words. Fig. 1 shows several green REL tags attached to the usual POS-tag and constituent node labels of the parse trees. For example, the lemma license is matched by the two sentences, thus both its father, JJ, and its grandfather, NP, nodes are marked with REL. Thanks to such relational labeling the simple kernel, PTK(a1, b1) + PTK(a2, b2), can generate relational features from a1, e.g., [NP [NP-REL [JJ-REL] NN]][PP]], [NP [NP-REL [NN]][PP]], [NP [NP-REL [JJ-REL]][PP]],... If such features are matched in b1, they provide the fuzzy information: there should be a match similar to [NP [NP-REL [JJ-REL]] also between a2 and b2. This kind of matches establishes a sort of relational pair features. It should be noted that we proposed more complex REL tagging policies for Passage Reranking, exploiting additional resources such as Linked Open Data or WordNet (Tymoshenko et al., 2014). Another interesting application of this RL framework is the Machine Translation Evaluation (Guzm´an et al., 2014). Finally, we used a similar model for translating questions to SQL queries in (Giordani and Moschitti, 2012). 4.2 Graph Representations The relational tree representation can capture relational features but the use of the same REL tag for any match between the two trees prevents to deterministically establish the correspondences between nodes. For exactly representing such matches (without incurring in non-valid kernels or sparsity problems), a graph representation is needed. If we connect matching nodes (or also nodes labelled as REL) in Fig. 1 (see dashed lines), we obtain a relational graph. Substructures of such graph clearly indicate how constituents, e.g., NPs, VPs, PPs, from one sentence map into the other sentence. If such mappings observed in a pair of paraphrase sentences are matched in another sentence pair, there may be evidence that also the second pair contains paraphrase sen1007 tences. Unfortunately, the kernel computing the space of all substructures of a graph (even if only considering connected nodes) is an intractable problem as mentioned in Sec. 3.3. Thus, we opt for the use of NSPDK, which generates specific pairs of structures. Intuitively, the latter can capture relational features between constituents of the two trees. Figure 1 shows an example of features generated by the NSPDK with parameters H = 1, D = 3 (the substructures are highlighted in yellow), i.e., [ADVB [VP] [RB]], [NP-REL [VP] [CD-REL] [NN-REL]]. 
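A minimal sketch of the REL tagging of Section 4.1 and of the match links used to build the relational graph of Section 4.2 follows. It assumes leaf labels follow the lemma::pos convention visible in Figure 1 and uses exact lemma matching only; the richer tagging policies based on WordNet or Linked Open Data mentioned above are not covered.

class TreeNode:
    def __init__(self, label, children=None):
        self.label = label
        self.parent = None
        self.children = []
        for child in children or []:
            child.parent = self
            self.children.append(child)

def leaves(tree):
    return [tree] if not tree.children else [l for c in tree.children for l in leaves(c)]

def lemma(leaf):
    return leaf.label.split("::")[0]          # e.g. "license::j" -> "license"

def mark_rel(tree_a, tree_b):
    # Tag the father and grandfather of every leaf whose lemma occurs in both
    # sentences with REL, and return the matched leaf pairs, which supply the
    # extra (dashed) edges of the relational graph.
    common = {lemma(l) for l in leaves(tree_a)} & {lemma(l) for l in leaves(tree_b)}
    for tree in (tree_a, tree_b):
        for leaf in leaves(tree):
            if lemma(leaf) not in common:
                continue
            father = leaf.parent
            grandfather = father.parent if father else None
            for ancestor in (father, grandfather):
                if ancestor is not None and not ancestor.label.endswith("-REL"):
                    ancestor.label += "-REL"
    return [(la, lb) for la in leaves(tree_a) for lb in leaves(tree_b)
            if lemma(la) == lemma(lb)]

Once the trees are tagged, PTK(a1, b1) + PTK(a2, b2) operates on the modified labels directly, and the returned pairs provide the additional edges for the graph given to NSPDK.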
4.3 Basic Features In addition to structural representations, we also use typical features for capturing the degrees of similarity between two sentences. In contrast, with the previous kernels these similarities are computed intra-pair, e.g., between a1 and a2. Note that any similarity measure generates only one feature. Their description follows: – Syntactic similarities, which apply the cosine function to vectors of n-grams (with n = 1, 2, 3, 4) of word lemmas and part-of-speech tags. – Kernel similarities, which use PTK or SPTK applied to the sentences within the pair. We also used similarity features from the DKPro of the UKP Lab (B¨ar et al., 2012), tested in the Semantic Textual Similarity (STS) task: – Longest common substring measure and Longest common subsequence measure, which determine the length of the longest substring shared by two text segments. – Running-Karp-Rabin Greedy String Tiling provides a similarity between two sentences by counting the number of shuffles in their subparts. – Resnik similarity based on the WordNet hierarchy. – Explicit Semantic Analysis (ESA) similarity (Gabrilovich and Markovitch, 2007) represents documents as weighted vectors of concepts learned from Wikipedia, WordNet and Wiktionary. – Lexical Substitution (Szarvas et al., 2013): a supervised word sense disambiguation system is used to substitute a wide selection of highfrequency English nouns with generalizations, then Resnik and ESA features are computed on the transformed text. 4.4 Combined representations As mentioned in Sec. 3.4, we can combine kernels for engineering new features. Let K be PTK or SPTK, given two pairs of sentences pa = ⟨a1, a2⟩and pb = ⟨b1, b2⟩, we build the following kernel combinations for the RTE task: (i) K+(pa, pb) = K(a1, b1) + K(a2, b2), which simply sums the similarities between the first two sentences and the second two sentences whose implication has to be derived. (ii) An alternative kernel combines the two similarity scores above with the product: K×(pa, pb) = K(a1, b1) · K(a2, b2). (iii) The symmetry of the PI task requires different kernels. The most intuitive applies K between all member combinations and sum all contributions: all+ K(pa, pb)=K(a1, b1) + K(a2, b2) + K(a1, b2) + K(a2, b1). (iv) It is also possible to combine pairs of corresponding kernels with the product: all× K(pa, pb) = K(a1, b1)K(a2, b2) + K(a1, b2)K(a2, b1). (v) An alternative kernel selects only the best between the two products above: MK(pa, pb) = max(K(a1, b1)K(a2, b2), K(a1, b2)K(a2, b1)). This is motivated by the observation that before measuring the similarity between two pairs, we need to establish which ai is more similar to bj. However, the max operator causes MK not to be a valid kernel function, thus we substitute it with a softmax function, which is a valid kernel, i.e., SMK(pa, pb) = softmax(K(a1, b1)K(a2, b2), K(a1, b2)K(a2, b1)), where softmax(x1, x2) = 1 clog(ecx1 + ecx2) (c=100 was accurate enough). The linear kernel (LK) over the basic features (described previously) and/or NSPDK can be of course added to all the above kernels. 
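The pair-level combinations of Section 4.4 are direct to implement once a base structural kernel K over single trees (e.g., PTK or SPTK) is available. A sketch follows, with the softmax written in a numerically stable form; the stabilization trick is an addition of this sketch, while the default c = 100 follows the text.

import math

def k_plus(K, pa, pb):        # K+, for the directional RTE setting
    return K(pa[0], pb[0]) + K(pa[1], pb[1])

def k_times(K, pa, pb):       # Kx
    return K(pa[0], pb[0]) * K(pa[1], pb[1])

def all_plus(K, pa, pb):      # all+, symmetric, suited to PI
    return (K(pa[0], pb[0]) + K(pa[1], pb[1]) +
            K(pa[0], pb[1]) + K(pa[1], pb[0]))

def all_times(K, pa, pb):     # allx
    return (K(pa[0], pb[0]) * K(pa[1], pb[1]) +
            K(pa[0], pb[1]) * K(pa[1], pb[0]))

def smk(K, pa, pb, c=100.0):  # SMK: softmax in place of the non-valid max
    x1 = K(pa[0], pb[0]) * K(pa[1], pb[1])
    x2 = K(pa[0], pb[1]) * K(pa[1], pb[0])
    m = max(x1, x2)
    return m + math.log(math.exp(c * (x1 - m)) + math.exp(c * (x2 - m))) / c

Since a sum of valid kernels is a valid kernel, the linear kernel over the basic features and/or NSPDK can be added to any of these simply by summing the corresponding scores.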
5 Experiments 5.1 Setup MSR Paraphrasing: we used the Microsoft Research Paraphrase Corpus (Dolan et al., 2004) consisting of 4,076 sentence pairs in the training set and 1,725 sentence pairs in test set, with a distribution of about 66% between positive and negative 1008 Vs Test 5 Fold Cross Validation Kernel Acc (%) P R F1 Acc (%) P R F1 without REL tagging LK 75.88 0.784 0.881 0.829 75.54 ± 0.45 0.786 ± 0.009 0.876 ± 0.019 0.828 ± 0.004 GK 72.81 0.720 0.967 0.825 72.49 ± 1.22 0.723 ± 0.014 0.957 ± 0.011 0.824 ± 0.008 SMP T K 72.06 0.722 0.943 0.818 72.04 ± 1.08 0.725 ± 0.009 0.940 ± 0.017 0.819 ± 0.009 SMSP T KLSA 72.12 0.722 0.943 0.818 72.56 ± 1.10 0.731 ± 0.010 0.937 ± 0.017 0.821 ± 0.009 SMSP T KW 2V 71.88 0.719 0.946 0.817 72.23 ± 1.07 0.727 ± 0.009 0.938 ± 0.017 0.820 ± 0.009 all× P T K 71.42 0.718 0.939 0.814 71.57 ± 0.86 0.724 ± 0.007 0.933 ± 0.015 0.815 ± 0.008 all× SP T KLSA 72.29 0.725 0.941 0.819 72.06 ± 0.62 0.730 ± 0.007 0.928 ± 0.014 0.817 ± 0.006 all× SP T KW 2V 71.59 0.717 0.947 0.816 71.61 ± 0.76 0.725 ± 0.008 0.931 ± 0.013 0.815 ± 0.007 all+ P T K 70.78 0.716 0.930 0.809 70.76 ± 0.91 0.720 ± 0.008 0.924 ± 0.017 0.809 ± 0.009 all+ SP T KLSA 71.48 0.720 0.934 0.813 71.42 ± 0.91 0.727 ± 0.008 0.920 ± 0.020 0.812 ± 0.009 all+ SP T KW 2V 70.72 0.714 0.935 0.809 71.19 ± 1.22 0.723 ± 0.010 0.927 ± 0.018 0.812 ± 0.011 MP T K 72.17 0.725 0.935 0.817 72.31 ± 0.67 0.731 ± 0.007 0.930 ± 0.015 0.819 ± 0.007 MSP T KLSA 72.00 0.725 0.934 0.816 72.32 ± 0.44 0.732 ± 0.006 0.927 ± 0.014 0.818 ± 0.005 MSP T KW 2V 71.71 0.722 0.933 0.814 71.99 ± 0.96 0.730 ± 0.008 0.926 ± 0.016 0.816 ± 0.008 with REL tagging GK 75.07 0.752 0.933 0.833 74.69 ± 2.52 0.749 ± 0.029 0.940 ± 0.008 0.834 ± 0.018 SMP T K 76.17 0.765 0.927 0.838 75.42 ± 0.86 0.771 ± 0.007 0.903 ± 0.012 0.832 ± 0.008 SMSP T KLSA 76.52 0.767 0.929 0.840 75.62 ± 0.90 0.772 ± 0.007 0.905 ± 0.013 0.833 ± 0.007 SMSP T KW 2V 76.35 0.766 0.929 0.839 75.64 ± 0.77 0.771 ± 0.004 0.907 ± 0.012 0.833 ± 0.007 all× P T K 75.36 0.767 0.905 0.830 74.76 ± 0.71 0.769 ± 0.006 0.892 ± 0.016 0.826 ± 0.008 all× SP T KLSA 75.65 0.770 0.903 0.831 74.83 ± 0.92 0.771 ± 0.009 0.891 ± 0.011 0.826 ± 0.008 all× SP T KW 2V 75.88 0.772 0.905 0.833 75.26 ± 0.81 0.771 ± 0.008 0.898 ± 0.011 0.830 ± 0.008 all+ P T K 74.49 0.762 0.895 0.824 73.99 ± 1.04 0.767 ± 0.010 0.880 ± 0.013 0.820 ± 0.009 all+ SP T KLSA 75.07 0.767 0.899 0.827 73.87 ± 0.85 0.766 ± 0.009 0.880 ± 0.010 0.819 ± 0.007 all+ SP T KW 2V 75.42 0.772 0.894 0.829 74.16 ± 0.75 0.768 ± 0.008 0.882 ± 0.012 0.821 ± 0.007 GK+SMSP T KW 2V 76.70 0.782 0.901 0.837 76.12 ± 0.96 0.787 ± 0.008 0.885 ± 0.015 0.833 ± 0.009 LK+GK 78.67 0.802 0.902 0.849 77.85 ± 1.00 0.804 ± 0.008 0.886 ± 0.015 0.843 ± 0.009 LK+SMSP T KW 2V 77.74 0.794 0.899 0.843 77.52 ± 1.41 0.802 ± 0.011 0.885 ± 0.016 0.841 ± 0.011 LK+GK+SMSP T KW 2V 79.13 0.807 0.901 0.852 78.11 ± 0.94 0.811 ± 0.005 0.879 ± 0.016 0.844 ± 0.009 (Socher et al., 2011) 76.8 − − 0.836 − − − − (Madnani et al., 2012) 77.4 − − 0.841 − − − − Table 1: Results on Paraphrasing Identification examples. These pairs were extracted from topically similar Web news articles, applying some heuristics that select potential paraphrases to be annotated by human experts. RTE-3. We adopted the RTE-3 dataset (Giampiccolo et al., 2007), which is composed by 800 texthypothesis pairs in both the training and test sets, collected by human annotators. The distribution of the examples among the positive and negative classes is balanced. 
5.1.1 Models and Parameterization We train our classifiers with the C-SVM learning algorithm (Chang and Lin, 2011) within KeLP3, a Kernel-based Machine Learning platform that implements tree kernels. In both tasks, we applied the kernels described in Sec. 4, where the trees are generated with the Stanford parser4. SPTK uses a node similarity function σ(n1, n2) implemented as follows: if n1 and n2 are two identical syntactic nodes σ = 1. If n1 and n2 are two lexical nodes with the same POS tag, their similarity is evaluated computing the cosine similarity of their corresponding vectors in a wordspace. In all the other cases σ = 0. We generated two different wordspaces. The first is 3https://github.com/SAG-KeLP 4http://nlp.stanford.edu/software/corenlp.shtml a co-occurrence LSA embedding as described in (Croce and Previtali, 2010). The second space is derived by applying a skip-gram model (Mikolov et al., 2013) with the word2vec tool5. SPTK using the LSA will be referred to as SPTKLSA, while when adopting word2vec it will be indicated with SPTKW2V . We used default parameters both for PTK and SPTK whereas we selected h and D parameters of NSPDK that obtained the best average accuracy using a 5-fold cross validation on the training set. 5.1.2 Performance measures The two considered tasks are binary classification problems thus we used Accuracy, Precision, Recall and F1. The adopted corpora have a predefined split between training and test sets thus we tested our models according to such settings for exactly comparing with previous work. Additionally, to better assess our results, we performed a 5fold cross validation on the complete datasets. In case of PI, the same sentence can appear in multiple pairs thus we distributed the pairs such that the same sentence can only appear in one fold at a time. 
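The node similarity σ plugged into SPTK above can be sketched as follows. Encoding nodes as (kind, label, pos) triples and returning zero for out-of-vocabulary lemmas are choices of this illustration, not details taken from KeLP; wordspace maps lemmas to their LSA or word2vec vectors.

import numpy as np

def sigma(n1, n2, wordspace):
    kind1, label1, pos1 = n1
    kind2, label2, pos2 = n2
    if kind1 == kind2 == "syntactic":
        return 1.0 if label1 == label2 else 0.0
    if kind1 == kind2 == "lexical" and pos1 == pos2:
        v1, v2 = wordspace.get(label1), wordspace.get(label2)
        if v1 is None or v2 is None:
            return 0.0
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        return float(v1 @ v2 / denom) if denom else 0.0
    return 0.0

Returning zero when the parts of speech differ enforces the constraint that lexical similarity is only applied between words playing comparable syntactic roles.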
5https://code.google.com/p/word2vec/ 1009 Vs Test 5 Fold Cross Validation Kernel Acc (%) P R F1 Acc (%) P R F1 without REL tagging LK 62 0.608 0.729 0.663 62.94 ± 5.68 0.635 ± 0.057 0.679 ± 0.083 0.652 ± 0.049 GK 55.375 0.555 0.651 0.599 55.63 ± 1.81 0.564 ± 0.022 0.612 ± 0.087 0.584 ± 0.032 P T K+ 56.13 0.560 0.676 0.612 54.13 ± 3.26 0.547 ± 0.024 0.637 ± 0.051 0.587 ± 0.027 SP T K+ LSA 56.88 0.566 0.683 0.619 53.63 ± 2.50 0.543 ± 0.024 0.622 ± 0.060 0.578 ± 0.027 SP T K+ W 2V 56.63 0.563 0.683 0.617 54.06 ± 2.34 0.546 ± 0.022 0.634 ± 0.060 0.585 ± 0.026 P T K× 55.88 0.558 0.671 0.609 52.81 ± 1.99 0.535 ± 0.025 0.623 ± 0.055 0.574 ± 0.028 SP T K× LSA 56.25 0.561 0.671 0.611 53.56 ± 2.09 0.543 ± 0.022 0.616 ± 0.065 0.576 ± 0.026 SP T K× W 2V 55.25 0.554 0.646 0.597 52.50 ± 1.77 0.533 ± 0.027 0.619 ± 0.071 0.571 ± 0.034 with REL tagging GK 61.63 0.603 0.734 0.662 59.81 ± 3.84 0.599 ± 0.037 0.678 ± 0.071 0.634 ± 0.026 P T K+ 66.00 0.627 0.829 0.714 67.75 ± 7.17 0.655 ± 0.067 0.817 ± 0.038 0.725 ± 0.046 SP T K+ LSA 65.38 0.622 0.824 0.709 67.81 ± 7.30 0.656 ± 0.069 0.816 ± 0.036 0.725 ± 0.047 SP T K+ W 2V 66.38 0.629 0.837 0.718 68.00 ± 7.15 0.658 ± 0.068 0.816 ± 0.039 0.726 ± 0.046 P T K× 66.13 0.629 0.827 0.714 67.75 ± 7.37 0.658 ± 0.071 0.804 ± 0.038 0.722 ± 0.049 SP T K× LSA 66.00 0.629 0.822 0.712 68.00 ± 7.62 0.661 ± 0.074 0.808 ± 0.039 0.725 ± 0.049 SP T K× W 2V 67.00 0.636 0.834 0.722 67.69 ± 6.95 0.658 ± 0.069 0.804 ± 0.040 0.722 ± 0.043 GK+SP T K× W 2V 66.38 0.634 0.815 0.713 66.00 ± 6.79 0.648 ± 0.069 0.769 ± 0.034 0.701 ± 0.044 LK+GK 62.25 0.609 0.737 0.667 62.06 ± 5.49 0.620 ± 0.051 0.702 ± 0.053 0.656 ± 0.036 LK+SP T K× W 2V 66.13 0.628 0.829 0.715 68.25 ± 7.54 0.663 ± 0.076 0.816 ± 0.032 0.728 ± 0.047 LK+GK+SP T K× W 2V 66.00 0.633 0.800 0.707 66.31 ± 7.35 0.652 ± 0.075 0.770 ± 0.053 0.703 ± 0.052 (Zanzotto et al., 2009) 66.75 0.667 − − − − − − (Iftene and Balahur-Dobrescu, 2007) 69.13 − − − − − − − Table 2: Results on Textual Entailment Recognition 5.2 Results on PI The results are reported in Table 1. The first column shows the use of the relational tag REL in the structures (discussed in Sec. 4.1). The second column indicates the kernel models described in sections 3 and 4 as well as the combination of the best models. Columns 3-6 report Accuracy, Precision, Recall and F1 derived on the fixed test set, whereas the remaining columns regard the results obtained with cross validation. We note that: First, when REL information is not used in the structures, the linear kernel (LK) on basic features outperforms all the structural kernels, which all perform similarly. The best structural kernel is the graph kernel, NSPDK (GK in short). This is not surprising as without REL, GK is the only kernel that can express relational features. Second, SPTK is only slightly better than PTK. The reason is mainly due to the approach used for building the dataset: potential paraphrases are retrieved applying some heuristics mostly based on the lexical overlap between sentences. Thus, in most cases, the lexical similarity used in SPTK is not needed as hard matches occur between the words of the sentences. Third, when REL is used on the structures, all kernels reach or outperform the F1 (official measure of the challenge) of LK. The relational structures seem to drastically reduce the inconsistent matching between positive and negative examples, reflecting in remarkable increasing in Precision. In particular, SMSPTKLSA achieves the state of the art6, i.e., 84.1 (Madnani et al., 2012). 
Next, combining our best models produces a significant improvement of the state of the art, e.g., LK +GK +SMSPTKW 2V outperforms the result in (Madnani et al., 2012) by 1.7% in accuracy and 1.1 points in F1. Finally, the cross-validation experiments confirm the system behavior observed on the fixed test set. The Std. Dev. (specified after the ± sign) shows that in most cases the system differences are significant. 5.3 Results on RTE We used the same experimental settings performed for PI to carry out the experiments on RTE. The results are shown in Table 2 structured in the same way as the previous table. We note that: (i) Findings similar to PI are obtained. (ii) Again the relational structures (using REL) provide a remarkable improvement in Accuracy (RTE challenge measure), allowing tree kernels to compete with the state of the art. This is an impressive result considering that our models do not use any external resource, e.g., as in (Iftene and BalahurDobrescu, 2007). (iii) This time, SPTK× W2V improves on PTK by 1 absolute percent point. 6The performance of the several best systems improved by our models are nicely summarized at http://aclweb. org/aclwiki/index.php?title=Paraphrase_ Identification_(State_of_the_art) 1010 (iv) The kernel combinations are not more effective than SPTK alone. Finally, the cross-fold validation experiments confirm the fixed-test set results. 6 Discussion and Conclusions In this paper, we have engineered and studied several models for relation learning. We utilized state-of-the-art kernels for structures and created new ones by combining kernels together. Additionally, we provide a novel definition of effective and computationally feasible structural kernels. Most importantly, we have designed novel computational structures for trees and graphs, which are for the first time tested in NLP tasks. Our kernels are computationally efficient thus solving one of the most important problems of previous work. We empirically tested our kernels on two of the most representative tasks of RL from text, namely, PI and RTE. The extensive experimentation using many kernel models also combined with traditional feature vector approaches sheds some light on how engineering effective graph and tree kernels for learning from pairs of entire text fragments. In particular, our best models significantly outperform the state of the art in PI and the best kernel model for RTE 3, with Accuracy close to the one of the best system of RTE 3. It should be stressed that the design of previous state-of-the-art models involved the use of several resources, annotation and heavy manually engineering of specific rules and features: this makes the portability of such systems on other domains and tasks extremely difficult. Moreover the unavailability of the used resources and the opacity of the used rules have also made such systems very difficult to replicate. On the contrary, the models we propose enable researchers to: (i) build their system without the use of specific resources. We use a standard syntactic parser, and for some models we use wellknown and available corpora for automatically learning similarities with word embedding algorithms; and (ii) reuse our work for different (similar) tasks (see paraphrasing) and data. The simplicity and portability of our system is a significant contribution to a very complex research area such as RL from two entire pieces of text. 
Our study has indeed shown that our kernel models, which are very simple to be implemented, reach the state of the art and can be used with large datasets. Furthermore, it should be noted that our models outperform the best tree kernel approach of the RTE challenges (Zanzotto and Moschitti, 2006) and also its extension that we proposed in (Zanzotto et al., 2009). These systems are also adaptable and easy to replicate, but they are subject to an exponential computational complexity and can thus only be used on very small datasets (e.g., they cannot be applied to the MSR Paraphrase corpus). In contrast, the model we proposed in this paper can be used on large datasets, because its kernel complexity is about linear (on average). We believe that disseminating these findings to the research community is very important, as it will foster research on RL, e.g., on RTE, using structural kernel methods. Such research has had a sudden stop as the RTE data in the latest challenges increased from 800 instances to several thousands and no tree kernel model has been enough accurate to replace our computational expensive models (Zanzotto et al., 2009). In the future, it would be interesting defining graph kernels that can combine more than two substructures. Another possible extension regards the use of node similarity in graph kernels. Additionally, we would like to test our models on other RTE challenges and on several QA datasets, which for space constraints we could not do in this work. Acknowledgments This research is part of the Interactive sYstems for Answer Search (IYAS) project, conducted by the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the Hamad Bin Khalifa University and Qatar Foundation. References Daniel B¨ar, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. Ukp: Computing semantic textual similarity by combining multiple content similarity measures. In Proc. of SemEval ’12. ACL. Karsten M Borgwardt and Hans-Peter Kriegel. 2005. Shortest-Path Kernels on Graphs. ICDM, 0:74–81. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proc. of CONLL ’05, USA. 1011 Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Elisa Cilia and Alessandro Moschitti. 2007. Advanced tree-based kernels for protein classification. In AI*IA 2007: Artificial Intelligence and HumanOriented Computing, 10th Congress of the Italian Association for Artificial Intelligence, Rome, Italy, September 10-13, 2007, Proceedings, pages 218– 229. Fabrizio Costa and Kurt De Grave. 2010. Fast neighborhood subgraph pairwise distance kernel. In ICML, number v. Danilo Croce and Daniele Previtali. 2010. Manifold learning for the semi-supervised induction of framenet predicates: An empirical investigation. Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings EMNLP. Giovanni Da San Martino, Nicol`o Navarin, and Alessandro Sperduti. 2012. A tree-based kernel for graphs. In Proceedings of the Twelfth SIAM International Conference on Data Mining, Anaheim, California, USA, April 26-28, 2012., pages 975–986. SIAM / Omnipress. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proc. of COLING ’04, Stroudsburg, PA, USA. 
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine, 31(3):59–79. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipediabased explicit semantic analysis. In Proc. of IJCAI07, pages 1606–1611. Thomas Gartner, P Flach, and S Wrobel. 2003. On Graph Kernels : Hardness Results and Efficient Alternatives. LNCS, pages 129–143. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proc. of the ACLPASCAL RTE ’07 Workshop, pages 1–9. ACL. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28:245–288. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to sql queries with generative parsers discriminatively reranked. In COLING (Posters), pages 401–410. Francisco Guzm´an, Shafiq Joty, Llu´ıs M`arquez, Alessandro Moschitti, Preslav Nakov, and Massimo Nicosia. 2014. Learning to differentiate better from worse translations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 214–220, Doha, Qatar, October. Association for Computational Linguistics. M Heinonen, N V¨alim¨aki, V M¨akinen, and J Rousu. 2012. Efficient Path Kernels for Reaction Function Prediction. Bioinformatics Models, Methods and Algorithms. Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Hypothesis transformation and semantic variability rules used in recognizing textual entailment. In RTE Workshop, Prague. Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. 2003. Marginalized kernels between labeled graphs. In ICML, pages 321–328. AAAI Press. Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of NAACL HLT ’12. ACL. Pierre Mah´e and Jean-Philippe Vert. 2008. Graph kernels based on tree patterns for molecules. Machine Learning, 75(1):3–35, October. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL-AFNLP. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Proc. of ECML’06, pages 318–329. Alessandro Moschitti. 2008. Kernel methods, syntax and semantics for relational text categorization. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08, pages 253–262, New York, NY, USA. ACM. Vivi Nastase, Preslav Nakov, Diarmuid Saghdha, and Stan Szpakowicz. 2013. Semantic Relations Between Nominals. Morgan & Claypool Publishers. Aliaksei Severyn and Alessandro Moschitti. 2012. Structural relationships for large-scale learning of answer re-ranking. In SIGIR. Aliaksei Severyn and Alessandro Moschitti. 2013. Automatic feature engineering for answer selection and extraction. In EMNLP, pages 458–467. Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013a. Building structures from classifiers for passage reranking. In Proceedings of 1012 the 22Nd ACM International Conference on Conference on Information &#38; Knowledge Management, CIKM ’13, pages 969–978, New York, NY, USA. ACM. 
Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013b. Learning adaptable patterns for passage reranking. Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 75–83. John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, USA. Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. 2011. Weisfeiler-Lehman Graph Kernels. JMLR, 12:2539–2561. Richard Socher, Eric H. Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y. Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS 24. Gy¨orgy Szarvas, Chris Biemann, and Iryna Gurevych. 2013. Supervised all-words lexical substitution using delexicalized features. In NAACL. Kateryna Tymoshenko, Alessandro Moschitti, and Aliaksei Severyn. 2014. Encoding semantic resources in syntactic structures for passage reranking. EACL 2014, page 664. S. V. N. Vishwanathan, Karsten M Borgwardt, and Nicol N Schraudolph. 2006. Fast Computation of Graph Kernels. In NIPS. E. Voorhees and D. Tice, 1999. The TREC-8 Question Answering Track Evaluation. Fabio Massimo Zanzotto and Alessandro Moschitti. 2006. Automatic learning of textual entailments with cross-pair similarities. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 401–408, Sydney, Australia, July. Association for Computational Linguistics. Fabio massimo Zanzotto, Marco Pennacchiotti, and Alessandro Moschitti. 2009. A machine learning approach to textual entailment recognition. Nat. Lang. Eng., 15(4):551–582, October. 1013
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1014–1023, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning Semantic Representations of Users and Products for Document Level Sentiment Classification Duyu Tang, Bing Qin∗, Ting Liu Harbin Institute of Technology, Harbin, China {dytang, qinb, tliu}@ir.hit.edu.cn Abstract Neural network methods have achieved promising results for sentiment classification of text. However, these models only use semantics of texts, while ignoring users who express the sentiment and products which are evaluated, both of which have great influences on interpreting the sentiment of text. In this paper, we address this issue by incorporating user- and product- level information into a neural network approach for document level sentiment classification. Users and products are modeled using vector space models, the representations of which capture important global clues such as individual preferences of users or overall qualities of products. Such global evidence in turn facilitates embedding learning procedure at document level, yielding better text representations. By combining evidence at user-, product- and documentlevel in a unified neural framework, the proposed model achieves state-of-the-art performances on IMDB and Yelp datasets1. 1 Introduction Document-level sentiment classification is a fundamental problem in the field of sentiment analysis and opinion mining (Pang and Lee, 2008; Liu, 2012). The task is to infer the sentiment polarity or intensity (e.g. 1-5 or 1-10 stars on review sites) of a document. Dominating studies follow Pang et al. (2002; 2005) and regard this problem as a multi-class classification task. They usually ∗Corresponding author. 1The codes and datasets are available at http://ir. hit.edu.cn/˜dytang/ use machine learning algorithms, and build sentiment classifier from documents with accompanying sentiment labels. Since the performance of a machine learner is heavily dependent on the choice of data representations (Domingos, 2012), many works focus on designing effective features (Pang et al., 2002; Qu et al., 2010; Kiritchenko et al., 2014) or learning discriminative features from data with neural networks (Socher et al., 2013; Kalchbrenner et al., 2014; Le and Mikolov, 2014). Despite the apparent success of neural network methods, they typically only use text information while ignoring the important influences of users and products. Let us take reviews with respect to 1-5 rating scales as an example. A critical user might write a review “it works great” and mark 4 stars, while a lenient user might give 5 stars even if he posts an (almost) identical review. In this case, user preference affects the sentiment rating of a review. Product quality also has an impact on review sentiment rating. Reviews towards high-quality products (e.g. Macbook) tend to receive higher ratings than those towards low-quality products. Therefore, it is feasible to leverage individual preferences of users and overall qualities of products to build a smarter sentiment classifier and achieve better performance2. In this paper, we propose a new model dubbed User Product Neural Network (UPNN) to capture user- and product-level information for sentiment classification of documents (e.g. reviews). 
UPNN takes as input a variable-sized document as well as the user who writes the review and the product which is evaluated. It outputs sentiment polarity label of a document. Users and products are encoded in continuous vector spaces, the representations of which capture important global clues such 2One can manually design a small number of user and product features (Gao et al., 2013). However, we argue that they are not effective enough to capture sophisticated semantics of users and products. 1014 as user preferences and product qualities. These representations are further integrated with continuous text representation in a unified neural framework for sentiment classification. We apply UPNN to three datasets derived from IMDB and Yelp Dataset Challenge. We compare to several neural network models including recursive neural networks (Socher et al., 2013), paragraph vector (Le and Mikolov, 2014), sentimentspecific word embedding (Tang et al., 2014b), and a state-of-the-art recommendation algorithm JMARS (Diao et al., 2014). Experimental results show that: (1) UPNN outperforms baseline methods for sentiment classification of documents; (2) incorporating representations of users and products significantly improves classification accuracy. The main contributions of this work are as follows: • We present a new neural network method (UPNN) by leveraging users and products for document-level sentiment classification. • We validate the influences of users and products in terms of sentiment and text on massive IMDB and Yelp reviews. • We report empirical results on three datasets, and show that UPNN outperforms state-of-the-art methods for sentiment classification. 2 Consistency Assumption Verification We detail the effects of users and products in terms of sentiment (e.g. 1-5 rating stars) and text, and verify them on review datasets. We argue that the influences of users and products include the following four aspects. • user-sentiment consistency. A user has specific preference on providing sentiment ratings. Some users favor giving higher ratings like 5 stars and some users tend to give lower ratings. In other words, sentiment ratings from the same user are more consistent than those from different users. • product-sentiment consistency. Similar with user-sentiment consistency, a product also has its “preference” to receive different average ratings on account of its overall quality. Sentiment ratings towards the same product are more consistent than those towards different products. • user-text consistency. A user likes to use personalized sentiment words when expressing opinion polarity or intensity. For example, a strict user might use “good” to express an excellent attitude, but a lenient user may use “good” to evaluate an ordinary product. Algorithm 1 Consistency Assumption Testing Input: data X, number of users/products m, number of iterations n Output: meaSamek, meaDiffk, 1 ≤k ≤n for k = 1 to n do meaSamek = 0, meaSamek = 0 for i = 1 to m do Sample xi, x+ i , x− i from X meaSamek += measure(xi, x+ i ) meaDiffk += measure(xi, x− i ) end for meaSamek /= m, meaDiffk /= m end for • product-text consistency. Similar with usertext consistency, a product also has a collection of product-specific words suited to evaluate it. For example, people prefer using “sleek” and “stable” to evaluate a smartphone, while like to use “wireless” and “mechanical” to evaluate a keyboard. We test four consistency assumptions mentioned above with the same testing criterion, which is formalized in Algorithm 1. 
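To make the testing criterion of Algorithm 1 concrete, here is a minimal Python sketch of the same sampling loop; the review-tuple layout and the helper names (consistency_test, rating_diff) are illustrative assumptions, not part of the released code.

```python
import random
from collections import defaultdict

def consistency_test(reviews, key, measure, n_iter=50):
    """Sketch of Algorithm 1: compare a measurement between pairs of reviews
    sharing the same user (or product) and pairs that do not.
    `reviews` is a list of dicts with 'user', 'product', 'rating' and 'text';
    `key` is 'user' or 'product'; `measure` maps two reviews to a number."""
    by_key = defaultdict(list)
    for r in reviews:
        by_key[r[key]].append(r)
    groups = [g for g in by_key.values() if len(g) >= 2]

    mea_same, mea_diff = [], []
    for _ in range(n_iter):
        same_total, diff_total = 0.0, 0.0
        for group in groups:                       # one sample per user/product
            x_i, x_plus = random.sample(group, 2)  # two reviews from the same one
            x_minus = random.choice(reviews)       # a review from a different one
            while x_minus[key] == x_i[key]:
                x_minus = random.choice(reviews)
            same_total += measure(x_i, x_plus)
            diff_total += measure(x_i, x_minus)
        mea_same.append(same_total / len(groups))
        mea_diff.append(diff_total / len(groups))
    return mea_same, mea_diff

# user-sentiment consistency: absolute rating difference as the measurement
rating_diff = lambda a, b: abs(a['rating'] - b['rating'])
# mea_same, mea_diff = consistency_test(reviews, key='user', measure=rating_diff)
```

For the user-text and product-text assumptions, the same routine would be called with a bag-of-words cosine similarity as the measure instead of the absolute rating difference.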
For each consistency assumption, we test it for n = 50 iterations on each of IMDB, Yelp Dataset Challenge 2013 and 2014 datasets. Taking user-sentiment consistency as an example, in each iteration, we randomly select two reviews xi, x+ i written by the same user ui, and a review x− i written by another randomly selected user. Afterwards, we calculate the measurements of (xi, x+ i ) and (xi, x− i ), and aggregate these statistics for m users. In user-sentiment assumption test, we use absolute rating difference ||ratinga −ratingb|| as the measurement between two reviews a and b. We illustrate the results in Figure 1 (a)3, where 2013same/2014same/amzsame (red plots) means that two reviews are written by a same user, and 2013diff/2014diff/amzdiff (black plots) means that two reviews are written by different users. We can find that: the absolute rating differences between two reviews written by a same user are lower than those written by different users (t-test with p-value < 0.01). In other words, sentiment ratings from the same user are more consistent than those from different users. This validates the user-sentiment consistency. For testing product-sentiment consistency, we 3Since the rating scale of IMDB (1-10) is different from Yelp (1-5), we divide the rating difference of IMDB reviews by two for better visualizing and analyzing the results. 1015 2013Same 2013Diff 2014Same 2014Diff IMDBSame IMDBDiff 1.2 1.25 1.3 1.35 1.4 1.45 1.5 1.55 1.6 (a) user-sentiment consistency 2013Same 2013Diff 2014Same 2014Diff IMDBSame IMDBDiff 1.15 1.2 1.25 1.3 1.35 1.4 1.45 1.5 1.55 (b) product-sentiment consistency 2013Same 2013Diff 2014Same 2014Diff IMDBSame IMDBDiff 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28 (c) user-text consistency 2013Same 2013Diff 2014Same 2014Diff IMDBSame IMDBDiff 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 (d) product-text consistency Figure 1: Assumption testing of user-sentiment, product-sentiment, user-text and product-text consistencies. We test them on the datasets from IMDB and Yelp Dataset Challenge in 2013 and 2014. use absolute rating difference as the measurement. The reviews xi, x+ i are towards a same product pi, and x− i is towards another randomly selected product. From Figure 1 (b), we can see that sentiment ratings towards the same product are more consistent than those towards different products. In order to verify the assumptions of user-text and product-text consistencies, we use cosine similarity between bag-of-words of two reviews as the measurement. Results are given in Figure 1 (c) and (d). We can see that the textual similarity between two reviews written by a same user (or towards a same product) are higher than those written by different users (or towards different products). 3 User Product Neural Network (UPNN) for Sentiment Classification We present the details of User Product Neural Network (UPNN) for sentiment classification. An illustration of UPNN is given in Figure 2. It takes as input a review, the user who posts the review, and the product which is evaluated. UPNN captures four kinds of consistencies which are verified in Section 2. It outputs the sentiment category (e.g. 1-5 stars) of a review by considering not only the semantics of review text, but also the information of user and product. In following subsections, we first describe the use of neural network for modeling semantics of variable-sized documents. 
We then present the methods for incorporating user and product information, followed by the use of UPNN in a supervised learning framework for sentiment classification. 3.1 Modeling Semantics of Document We model the semantics of documents based on the principle of compositionality (Frege, 1892), which states that the meaning of a longer expression (e.g. a sentence or a document) comes from the meanings of its words and the rules used to combine them. Since a document consists of a list of sentences and each sentence is made up of a list of words, we model the semantic representation of a document in two stages. We first produce continuous vector of each sentence from word representations. Afterwards, we feed sentence vectors as inputs to compose document representation. For modeling the semantics of words, we represent each word as a low dimensional, continu1016 Softmax gold rating = 2 w1 h1 Uk Pj h2 hn Lookup Linear …… Convolution Pooling uk pj vd Tanh w1 × × w2 Uk Pj w2 × × wn Uk Pj wn × × Figure 2: An illustration of the neural network approach for sentiment classification. wi means the i-th word of a review text. uk and pj are continuous vector representations of user k and product j for capturing user-sentiment and product-sentiment consistencies. Uk and Pj are continuous matrix representations of user k and product j for capturing user-text and product-text consistencies. ous and real-valued vector, also known as word embedding (Bengio et al., 2003). All the word vectors are stacked in a word embedding matrix Lw ∈Rd×|V |, where d is the dimension of word vector and |V | is the size of word vocabulary. These word vectors can be randomly initialized from a uniform distribution, regarded as a parameter and jointly trained with other parameters of neural networks. Alternatively, they can be pretrained from text corpus with embedding learning algorithms (Mikolov et al., 2013; Pennington et al., 2014; Tang et al., 2014b), and applied as initial values of word embedding matrix. We adopt the latter strategy which better exploits the semantic and grammatical associations of words. To model semantic representations of sentences, convolutional neural network (CNN) and recursive neural network (Socher et al., 2013) are two state-of-the-art methods. We use CNN (Kim, 2014; Kalchbrenner et al., 2014) in this work as it does not rely on external parse tree. Specifically, we use multiple convolutional filters with different widths to produce sentence representation. The reason is that they are capable of capturing local semantics of n-grams of various granularities, which are proven powerful for sentiment classification. The convolutional filter with a width of 3 essentially captures the semantics of trigrams in a sentence. Accordingly, multiple convolutional filters with widths of 1, 2 and 3 encode the semantics of unigrams, bigrams and trigrams in a sentence. An illustration of CNN with three convolutional filters is given in Figure 3. Let us denote a sentence consisting of n words as {w1, w2, ...wi, ...wn}. Each word wi is mapped to its embedding representation ei ∈Rd. A convolutional filter is a list of linear layers with shared parameters. Let lcf be the width of a convolutional filter, and let Wcf, bcf be the shared parameters of linear layers in the filter. The input of a linear layer is the concatenation of word embeddings in a fixed-length window size lcf, which is denoted as Icf = [ei; ei+1; ...; ei+lcf−1] ∈Rd·lcf . 
The output of a linear layer is calculated as Ocf = Wcf · Icf + bcf (1) where Wcf ∈Rlen×d·lcf , bcf ∈Rlen, len is the output length of linear layer. In order to capture the global semantics of a sentence, we feed the output of a convolutional filter to an average pooling layer, resulting in an output vector with fixedlength. We further add hyperbolic tangent functions (tanh) to incorporate element-wise nonlinearity, and fold (average) their outputs to generate sentence representation. We feed sentence vectors as the input of an average pooling layer to obtain the document representation. Alternative document modeling approaches include CNN or recurrent neural network. However, we prefer average pooling for its computational efficiency and good performance in our experiment. 3.2 Modeling Semantics of Users and Products We integrate semantic representations of users and products in UPNN to capture user-sentiment, 1017 Folding w1 Lookup Convolution Pooling w2 w3 w4 w𝑛−1 w𝑛 Tanh filter1 filter2 filter3 Figure 3: Convolutional neural network with multiple convolutional filters for sentence modeling. product-sentiment, user-text and product-text consistencies. For modeling user-sentiment and productsentiment consistencies, we embed each user as a continuous vector uk ∈Rdu and embed each product as a continuous vector pj ∈Rdp, where du and dp are dimensions of user vector and product vector, respectively. The basic idea behind this is to map users with similar rating preferences (e.g. prefer assigning 4 stars) into close vectors in user embedding space. Similarly, the products which receive similar averaged ratings are mapped into neighboring vectors in product embedding space. In order to model user-text consistency, we represent each user as a continuous matrix Uk ∈ RdU×d, which acts as an operator to modify the semantic meaning of a word. This is on the basis of vector based semantic composition (Mitchell and Lapata, 2010). They regard compositional modifier as a matrix X1 to modify another component x2, and use matrix-vector multiplication y = X1 × x2 as the composition function. Multiplicative semantic composition is suitable for our need of user modifying word meaning, and it has been successfully utilized to model adjectivenoun composition (Clark et al., 2008; Baroni and Zamparelli, 2010) and adverb-adjective composition (Socher et al., 2012). Similarly, we model product-text consistency by encoding each product as a matrix Pj ∈RdP ×d, where d is the dimension of word vector, dP is the output length of product-word multiplicative composition. After conducting user-word multiplication and productword multiplication operations, we concatenate their outputs and feed them to CNN (detailed in Section 3.1) for producing user and product enhanced document representation. 3.3 Sentiment Classification We apply UPNN to document level sentiment classification under a supervised learning framework (Pang and Lee, 2005). Instead of using handcrafted features, we use continuous representation of documents, users and products as discriminative features. The sentiment classifier is built from documents with gold standard sentiment labels. As is shown in Figure 2, the feature representation for building rating predictor is the concatenation of three parts: continuous user representation uk, continuous product representation pj and continuous document representation vd, where vd encodes user-text consistency, product-text consistency and document level semantic composition. 
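As a rough sketch of the composition just described, the following numpy code builds a sentence vector with convolutional filters (Eq. 1 plus average pooling and tanh), modifies each word vector with the user matrix Uk and the product matrix Pj, and averages sentence vectors into vd. The function names, the exact way the Uk·w and Pj·w outputs are concatenated, and the omission of all training machinery are simplifying assumptions on our part, not the authors' implementation.

```python
import numpy as np

def conv_filter(E, W, b, width):
    """One convolutional filter (Eq. 1): a shared linear layer over every window
    of `width` consecutive word vectors, average pooling over positions, then tanh.
    W must have width * (word dimension of E) columns; assumes len(E) >= width."""
    n = E.shape[0]
    windows = [np.concatenate(E[i:i + width]) for i in range(n - width + 1)]
    return np.tanh(np.mean([W @ I_cf + b for I_cf in windows], axis=0))

def upnn_document_vector(sentences, U_k, P_j, filters):
    """Compose v_d from a list of (n_words x d) embedding matrices. Each word e_i
    is replaced by [U_k @ e_i ; P_j @ e_i] (user-/product-text consistency), then
    filters of widths 1, 2, 3 are applied; all filters share the same output
    length so they can be folded (averaged)."""
    sent_vecs = []
    for E in sentences:
        modified = np.hstack([E @ U_k.T, E @ P_j.T])
        sent_vecs.append(np.mean([conv_filter(modified, W, b, width)
                                  for W, b, width in filters], axis=0))
    return np.mean(sent_vecs, axis=0)       # average pooling over sentence vectors

# feature vector fed to the classifier: [u_k ; p_j ; v_d]
# features = np.concatenate([u_k, p_j, upnn_document_vector(sents, U_k, P_j, filters)])
```

In the real model, of course, W, b, u_k, p_j, U_k and P_j are all learned jointly with the softmax layer described next.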
We use softmax to build the classifier because its outputs can be interpreted as conditional probabilities. Softmax is calculated as given in Equation 2, where C is the category number (e.g. 5 or 10). softmaxi = exp(xi) PC i′=1 exp(xi′) (2) We regard cross-entropy error between gold sentiment distribution and predicted sentiment distribution as the loss function of softmax. We take the derivative of loss function through back-propagation with respect to the whole set of parameters θ = [W 1,2,3 cf ; b1,2,3 cf ; uk; pj; Uk; Pj; Wsoftmax, bsoftmax], and update parameters with stochastic gradient descent. We set the widths of three convolutional filters as 1, 2 and 3. We learn 200-dimensional sentiment-specific word embeddings (Tang et al., 2014b) on each dataset separately, randomly initialize other parameters from a uniform distribution U(−0.01, 0.01), and set learning rate as 0.03. 4 Experiment We conduct experiments to evaluate UPNN by applying it to sentiment classification of documents. 4.1 Experimental Setting Existing benchmark datasets for sentiment classification such as Stanford Sentiment Treebank (Socher et al., 2013) typically only have text information, but do not contain users who express the sentiment or products which are evaluated. Therefore, we build the datasets by ourselves. In order to obtain large scale corpora without manual annotation, we derive three datasets from IMDB (Diao 1018 Dataset #users #products #reviews #docs/user #docs/product #sents/doc #words/doc IMDB 1,310 1,635 84,919 64.82 51.94 16.08 394.6 Yelp 2014 4,818 4,194 231,163 47.97 55.11 11.41 196.9 Yelp 2013 1,631 1,633 78,966 48.42 48.36 10.89 189.3 Table 1: Statistical information of IMDB, Yelp 2014 and Yelp 2013 datasets used for sentiment classification. The rating scale of IMDB dataset is 1-10. The rating scale of Yelp 2014 and Yelp 2013 datasets is 1-5. |V | is the vocabulary size of words in each dataset. #users is the number of users, #docs/user means the average number of documents per user posts in the corpus. et al., 2014) and Yelp Dataset Challenge4 in 2013 and 2014. Statistical information of the generated datasets are given in Table 1. We split each corpus into training, development and testing sets with a 80/10/10 split, and conduct tokenization and sentence splitting with Stanford CoreNLP (Manning et al., 2014). We use standard accuracy (Manning and Sch¨utze, 1999; Jurafsky and Martin, 2000) to measure the overall sentiment classification performance, and use MAE and RMSE to measure the divergences between predicted sentiment ratings (pr) and ground truth ratings (gd). MAE = P i |gdi −pri| N (3) RMSE = rP i(gdi −pri)2 N (4) 4.2 Baseline Methods We compare UPNN with the following baseline methods for document-level sentiment classification. (1) Majority is a heuristic baseline method, which assigns the majority sentiment category in training set to each review in the test dataset. (2) In Trigram, we use unigrams, bigrams and trigrams as features and train classifier with supported vector machine (SVM) (Fan et al., 2008). (3) In TextFeature, we implement hand-crafted text features including word/character ngrams, sentiment lexicon features, negation features, etc al. (Kiritchenko et al., 2014). (4) We extract user-leniency features (Gao et al., 2013) and corresponding product features (denoted as UPF) from training data, and concatenate them with the features in baseline (2) and (3). 
(5) We learn word embeddings from training and development sets with word2vec (Mikolov et al., 2013), average word embeddings to get document representation, and train a SVM classifier. 4http://www.yelp.com/dataset_challenge (6) We learn sentiment-specific word embeddings (SSWE) from training and development sets, and use max/min/average pooling (Tang et al., 2014b) to generate document representation. (7) We represent sentence with RNTN (Socher et al., 2013) and compose document representation with recurrent neural network. We average hidden vectors of recurrent neural network as the features for sentiment classification. (8) We re-implement PVDM in Paragraph Vector (Le and Mikolov, 2014) because its codes are not officially provided. The window size is tuned on development set. (9) We compare with a state-of-the-art recommendation algorithm JMARS (Diao et al., 2014), which leverages user and aspects of a review with collaborative filtering and topic modeling. 4.3 Model Comparisons Experimental results are given in Table 2. The results are separated into two groups: the methods above only use texts of review, and the methods below also use user and product information. From the first group, we can see that majority performs very poor because it does not capture any text or user information. SVM classifiers with trigrams and hand-crafted text features are powerful for document level sentiment classification and hard to beat. We compare the word embedding learnt from each corpus with off-theshell general word embeddings5. Results show that tailored word embedding from each corpus performs slightly better than general word embeddings (about 0.01 improvement in terms of accuracy). SSWE performs better than context-based word embedding by incorporating sentiment information of texts. Setting a large window size (e.g. 15) is crucial for effectively training SSWE from documents with accompanying senti5We compare with Glove embeddings learnt from Wikipedia and Twitter http://nlp.stanford.edu/ projects/glove/ 1019 IMDB Yelp 2014 Yelp 2013 Acc MAE RMSE Acc MAE RMSE Acc MAE RMSE Majority 0.196 1.838 2.495 0.392 0.779 1.097 0.411 0.744 1.060 Trigram 0.399 1.147 1.783 0.577 0.487 0.804 0.569 0.513 0.814 TextFeature 0.402 1.134 1.793 0.572 0.490 0.800 0.556 0.520 0.845 AvgWordvec + SVM 0.304 1.361 1.985 0.530 0.562 0.893 0.526 0.568 0.898 SSWE + SVM 0.312 1.347 1.973 0.557 0.523 0.851 0.549 0.529 0.849 Paragraph Vector 0.341 1.211 1.814 0.564 0.496 0.802 0.554 0.515 0.832 RNTN + Recurrent 0.400 1.133 1.764 0.582 0.478 0.821 0.574 0.489 0.804 UPNN (no UP) 0.405 1.030 1.629 0.585 0.483 0.808 0.577 0.485 0.812 Trigram + UPF 0.404 1.132 1.764 0.576 0.471 0.789 0.570 0.491 0.803 TextFeature + UPF 0.402 1.129 1.774 0.579 0.476 0.791 0.561 0.509 0.822 JMARS N/A 1.285 1.773 N/A 0.710 0.999 N/A 0.699 0.985 UPNN (full) 0.435 0.979 1.602 0.608 0.447 0.764 0.596 0.464 0.784 Table 2: Sentiment classification on IMDB, Yelp 2014 and Yelp 2013 datasets. Evaluation metrics are accuracy (Acc, higher is better), MAE (lower is better) and RMSE (lower is better). Our full model is UPNN (full). Our model without using user and product information is abbreviated as UPNN (no UP). The best method in each group is in bold. ment labels. RNTN+Reccurent is a strong performer by effectively modeling document representation with semantic composition. Our text based model (UPNN no UP) performs slightly better than RNTN+Reccurent, trigram and text features. 
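As a side note on the evaluation metrics reported in Table 2, accuracy together with MAE (Eq. 3) and RMSE (Eq. 4) can be computed in a few lines; a small self-contained sketch over gold and predicted ratings:

```python
import math

def evaluate(gold, pred):
    """Accuracy, MAE (Eq. 3) and RMSE (Eq. 4) over N test reviews."""
    n = len(gold)
    acc = sum(g == p for g, p in zip(gold, pred)) / n
    mae = sum(abs(g - p) for g, p in zip(gold, pred)) / n
    rmse = math.sqrt(sum((g - p) ** 2 for g, p in zip(gold, pred)) / n)
    return acc, mae, rmse

# e.g. evaluate([5, 3, 4], [5, 2, 4]) -> (0.667, 0.333, 0.577), approximately
```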
From the second group, we can see that concatenating user product feature (UPF) with existing feature sets does not show significant improvements. This is because the dimension of existing feature sets is typically huge (e.g. 1M trigram features in Yelp 2014), so that concatenating a small number of UPF features does not have a great influence on the whole model. We do not evaluate JMARS in terms of accuracy because JMARS outputs real-valued ratings. Our full model UPNN yields the best performance on all three datasets. Incorporating semantic representations of user and product significantly (t-test with p-value < 0.01) boosts our text based model (UPNN no UP). This shows the effectiveness of UPNN over standard trigrams and hand-crafted features when incorporating user and product information. 4.4 Model Analysis: Effect of User and Product Representations We investigate the effects of vector based user and product representations (uk, pj) as well as matrix based user and product representations (Uk, Pj) for sentiment classification. We remove vector based representations (uk, pj) and matrix based representations (Uk, Pj) from UPNN separately, and conduct experiments on three datasets. From Table 3, we can find that vector based representations (uk, pj) are more effective than matrix based representations (Uk, Pj). This is because uk and pj encode user-sentiment and product-sentiment consistencies, which are more directly associated with sentiment labels than user-text (Uk) and product-text (Pj) consistencies. Another reason might be that the parameters of vector representations are less than the matrix representations, so that the vector representations are better estimated. We also see the contribution from each of user and product by removing (Uk, uk) and (Pj, pj) separately. Results are given in Table 3. It is interesting to find that user representations are obviously more effective than product representations for review rating prediction. 4.5 Discussion: Out-Of-Vocabulary Users and Products Out-of-vocabulary (OOV) situation occurs if a user or a product in testing/decoding process is never seen in training data. We give two natural solutions (avg UP and unk UP) to deal with OOV users and products. One solution (avg UP) is to regard the averaged representations of users/products in training data as the representation of OOV user/product. Another way (unk UP) is to learn a shared “unknown” user/product representation for low-frequency users in training data, and apply it to OOV user/product. 1020 IMDB Yelp 2014 Yelp 2013 Acc MAE RMSE Acc MAE RMSE Acc MAE RMSE UPNN (full) 0.435 0.979 1.602 0.608 0.447 0.764 0.596 0.464 0.784 UPNN −uk −pj 0.409 1.021 1.622 0.585 0.483 0.808 0.572 0.491 0.823 UPNN −Uk −Pj 0.426 0.993 1.607 0.597 0.465 0.789 0.585 0.482 0.802 UPNN −Uk −uk 0.324 1.209 1.743 0.577 0.475 0.778 0.566 0.505 0.828 UPNN −Pj −pj 0.397 1.075 1.712 0.595 0.462 0.776 0.590 0.476 0.802 Table 3: Influence of user and product representations. For user k and product j, uk and pj are their continuous vector representations, Uk and Pj are their continuous matrix representations (see Figure 2). IMDB Yelp 2014 Yelp 2013 0.35 0.4 0.45 0.5 0.55 0.6 0.65 no UP avg UP unk UP full UP Figure 4: Accuracy of OOV user and product on OOV test set. In order to evaluate the two strategies for OOV problem, we randomly select 10 percent users and products from each development set, and mask their user and product information. 
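The two OOV strategies just described amount to a simple lookup fallback, sketched below with a hypothetical helper; the shared "unknown" vector is assumed to have been learned beforehand on low-frequency training users as described above.

```python
import numpy as np

def lookup_user(user, user_vecs, unk_vec=None, strategy="avg"):
    """Return a representation for `user` (products are handled analogously).
    avg UP: fall back to the mean of all training-user vectors.
    unk UP: fall back to a shared vector learned on low-frequency training users."""
    if user in user_vecs:
        return user_vecs[user]
    if strategy == "avg":
        return np.mean(list(user_vecs.values()), axis=0)
    return unk_vec
```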
We run avg UP, unk UP together with UPNN (no UP) which only uses text information, and UPNN (full) which learns tailored representation for each user and product. We evaluate classification accuracy on the extracted OOV test set. Experimental results are given in Figure 5. We can find that these two strategies perform slightly better than UPNN (no UP), but still worse than the full model. 5 Related Work 5.1 Sentiment Classification Sentiment classification is a fundamental problem in sentiment analysis, which targets at inferring the sentiment label of a document. Pang and Lee (2002; 2005) cast this problem a classification task, and use machine learning method in a supervised learning framework. Goldberg and Zhu (2006) use unlabelled reviews in a graphbased semi-supervised learning method. Many studies design effective features, such as text topic (Ganu et al., 2009), bag-of-opinion (Qu et al., 2010) and sentiment lexicon features (Kiritchenko et al., 2014). User information is also used for sentiment classification. Gao et al. (2013) design user-specific features to capture user leniency. Li et al. (2014) incorporate textual topic and user-word factors with supervised topic modeling. Tan et al. (2011) and Hu et al. (2013) utilize usertext and user-user relations for Twitter sentiment analysis. Unlike most previous studies that use hand-crafted features, we learn discriminative features from data. We differ from Li et al. (2014) in that we encode four kinds of consistencies and use neural network approach. User representation is also leveraged for recommendation (Weston et al., 2013), web search (Song et al., 2014) and social media analytics (Perozzi et al., 2014). 5.2 Neural Network for Sentiment Classification Neural networks have achieved promising results for sentiment classification. Existing neural network methods can be divided into two groups: word embedding and semantic composition. For learning word embeddings, (Mikolov et al., 2013; Pennington et al., 2014) use local and global contexts, (Maas et al., 2011; Labutov and Lipson, 2013; Tang et al., 2014b; Tang et al., 2014a; Zhou et al., 2015) further incorporate sentiment of texts. For learning semantic composition, Glorot et al. (2011) use stacked denoising autoencoder, Socher et al. (2013) introduce a family of recursive deep neural networks (RNN). RNN is extended with adaptive composition functions (Dong et al., 2014), global feedbackward (Paulus et al., 2014), feature weight tuning (Li, 2014), and also used for opinion relation detection (Xu et al., 2014). Li et al. (2015) compare the effectiveness of recursive neural network and recurrent neural network on five NLP tasks including sentiment classification. (Kalchbrenner et al., 2014; Kim, 2014; Johnson and Zhang, 2014) use convolutional neural networks. Le and Mikolov (2014) introduce 1021 Paragraph Vector. Unlike existing neural network approaches that only use the semantics of texts, we take consideration of user and product representations and leverage their connections with text semantics for sentiment classification. This work is an extension of our previous work (Tang et al., 2015), which only takes consideration of userword association. 6 Conclusion In this paper, we introduce User Product Neural Network (UPNN) for document level sentiment classification under a supervised learning framework. We validate user-sentiment, productsentiment, user-text and product-text consistencies on massive reviews, and effectively integrate them in UPNN. 
We apply the model to three datasets derived from IMDB and Yelp Dataset Challenge. Empirical results show that: (1) UPNN outperforms state-of-the-art methods for document level sentiment classification; (2) incorporating continuous user and product representations significantly boosts sentiment classification accuracy. Acknowledgments The authors give great thanks to Furu Wei, Lei Cui, Nan Yang, Jiwei Li, Yaming Sun, Mao Zheng and anonymous reviewers for their valuable comments. We would like to thank Qiming Diao for providing the IMDB dataset as well as the codes of JMARS. This work was supported by the National High Technology Development 863 Program of China (No. 2015AA015407), National Natural Science Foundation of China (No. 61133012 and No. 61273321). Duyu Tang is supported by Baidu Fellowship and IBM Ph.D. Fellowship. References Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In EMNLP, pages 1183–1193. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distributional model of meaning. In Quantum Interaction Symposium, pages 133–140. Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (jmars). In SIGKDD, pages 193–202. ACM. Pedro Domingos. 2012. A few useful things to know about machine learning. Communications of the ACM, 55(10):78–87. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2014. Adaptive multi-compositionality for recursive neural models with applications to sentiment analysis. In AAAI, pages 1537–1543. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. JMLR. Gottlob Frege. 1892. On sense and reference. Ludlow (1997), pages 563–584. Gayatree Ganu, Noemie Elhadad, and Am´elie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In WebDB. Wenliang Gao, Naoki Yoshinaga, Nobuhiro Kaji, and Masaru Kitsuregawa. 2013. Modeling user leniency and product popularity for sentiment classification. In ICJNLP, pages 1107–1111. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 513–520. Andrew B Goldberg and Xiaojin Zhu. 2006. Seeing stars when there aren’t many stars: graph-based semisupervised learning for sentiment categorization. In GraphBased Method for NLP, pages 45–52. Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Exploiting social relations for sentiment analysis in microblogging. In WSDM, pages 537–546. Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint: 1412.1058. Dan Jurafsky and James H Martin. 2000. Speech & language processing. Pearson Education India. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL, pages 655–665. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mohammad. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, pages 723–762. 
1022 Igor Labutov and Hod Lipson. 2013. Re-embedding words. In Annual Meeting of the Association for Computational Linguistics. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML, pages 1188–1196. Fangtao Li, Sheng Wang, Shenghua Liu, and Ming Zhang. 2014. Suit: A supervised user-item based topic model for sentiment analysis. In AAAI, pages 1636–1642. Jiwei Li, Dan Jurafsky, and Eudard Hovy. 2015. When are tree structures necessary for deep learning of representations? arXiv preprint:1503.00185. Jiwei Li. 2014. Feature weight tuning for recursive neural networks. Arxiv preprint, 1412.3714. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL, pages 142–150. Christopher D Manning and Hinrich Sch¨utze. 1999. Foundations of statistical natural language processing. MIT press. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL, pages 55–60. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115– 124. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In EMNLP, pages 79– 86. Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural networks. In NIPS, pages 2888–2896. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In SIGKDD, pages 701–710. ACM. Lizhen Qu, Georgiana Ifrim, and Gerhard Weikum. 2010. The bag-of-opinions method for review rating prediction from sparse text patterns. In COLING, pages 913–921. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In EMNLP, pages 1201–1211. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Yang Song, Hongning Wang, and Xiaodong He. 2014. Adapting deep ranknet for personalized search. In WSDM, pages 83–92. ACM. Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In SIGKDD, pages 1397–1405. ACM. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014a. Building large-scale twitter-specific sentiment lexicon: A representation learning approach. In COLING, pages 172–182. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014b. 
Learning sentimentspecific word embedding for twitter sentiment classification. In ACL, pages 1555–1565. Duyu Tang, Bing Qin, Ting Liu, and Yuekui Yang. 2015. User modeling with neural network for review rating prediction. In IJCAI. Jason Weston, Ron J Weiss, and Hector Yee. 2013. Nonlinear latent factorization by embedding multiple user interests. In RecSys, pages 65–68. ACM. Liheng Xu, Kang Liu, and Jun Zhao. 2014. Joint opinion relation detection using one-class deep neural network. In COLING, pages 677–687. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2015. Representation learning for aspect category detection in online reviews. In AAAI. 1023
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1024–1034, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Towards Debugging Sentiment Lexicons Andrew Schneider Computer and Information Sciences Temple University [email protected] Eduard Dragut Computer and Information Sciences Temple University [email protected] Abstract Central to many sentiment analysis tasks are sentiment lexicons (SLs). SLs exhibit polarity inconsistencies. Previous work studied the problem of checking the consistency of an SL for the case when the entries have categorical labels (positive, negative or neutral) and showed that it is NPhard. In this paper, we address the more general problem, in which polarity tags take the form of a continuous distribution in the interval [0, 1]. We show that this problem is polynomial. We develop a general framework for addressing the consistency problem using linear programming (LP) theory. LP tools allow us to uncover inconsistencies efficiently, paving the way to building SL debugging tools. We show that previous work corresponds to 0-1 integer programming, a particular case of LP. Our experimental studies show a strong correlation between polarity consistency in SLs and the accuracy of sentiment tagging in practice. 1 Introduction Many sentiment analysis algorithms rely on sentiment lexicons (SLs), where word forms or word senses1 are tagged as conveying positive, negative or neutral sentiments. SLs are constructed by one of three methods (Liu, 2012; Feldman, 2013): (1) Manual tagging by human annotators is generally reliable, but because it is labor-intensive, slow, and costly, this method has produced small-sized SLs comprising a few thousand words, e.g., Opinion Finder (OF) (Wilson et al., 2005), Appraisal Lexicon (AL) (Taboada and Grieve, 2004), General Inquirer (GI) (Stone et al., 1966), and MicroWNOp (Cerini et al., 2007). (2) Dictionary1We refer to a string of letters or sounds as a word form & to a pairing of a word form with a meaning as a word sense. based acquisition relies on a set of seed words to expand its coverage to similar words. There are over thirty dictionary-based techniques (Andreevskaia and Bergler, 2006; Blum et al., 2004; Chen and Skiena, 2014; Choi and Wiebe, 2014; Esuli and Sebastiani, 2006; Feng et al., 2013; Hassan and Radev, 2010; Kamps et al., 2004; Mohammad et al., 2009; Takamura et al., 2005; Turney, 2002; Williams and Anand, 2009), most of them based on WordNet (Fellbaum, 1998), such as SentiWordNet (SWN)(Baccianella et al., 2010) and Q-WordNet (QWN) (Agerri and Garc´ıa-Serrano, 2010). (3) Corpus-based acquisition expands a set of seed words with the use of a large document corpus (Breck et al., 2007; Bross and Ehrig, 2013; Choi and Cardie, 2009; Ding et al., 2008; Du et al., 2010; Hatzivassiloglou and McKeown, 1997; Jijkoun et al., 2010; Kaji and Kitsuregawa, 2007; Klebanov et al., 2013; Lu et al., 2011; Peng and Park, 2011; Tang et al., 2014; Wu and Wen, 2010). Method (1) generally produces the most reliable annotations, however the considerable effort required to yield substantial lexicons makes it less useful in practice. The appeals of (2) and (3) lie in the formalism of their models and their capability of producing large-sized SLs. SLs are either word or sense/synset oriented. 
We refer to the former as Sentiment Word Lexicons (SWLs), e.g., GI, OF, and AL, and to the latter as Sentiment Sense Lexions (SSLs), e.g., SWN, QWN, and Micro-WNOp. Besides the method of compilation, SLs may also vary with regard to sentiment annotation. Polarity disagreements are noted across SLs that do (SWN, Q-WordNet) and do not (AL, GI) reference WordNet. For instance, the adjectives panicky and terrified, have negative and positive polarities in OF, respectively. They each have only one synset which they share in WordNet: “thrown into a state of intense fear or desperation”. Assuming that there is an intrinsic re1024 lationship between the sentiments of a word and its meanings, a single synset polarity assignment to this synset cannot agree with both positive and negative at the word level. If the information given in WordNet is accurate (the Oxford and Cambridge dictionaries give only this meaning for both words) then there must be an annotation inconsistency in OF, called a polarity inconsistency. While some inconsistencies are easy to detect, manual consistency checking of an entire SL is an impractical endeavor, primarily because of the sheer size (SWN has over 206,000 word-sense pairs). Additionally, WordNet’s complex network structure renders manual checking virtually impossible; an instance of a polarity inconsistency may entail an entire sub-network of words and senses. In this paper we develop a rigorous formal method based on linear programming (LP)(Schrijver, 1986) for polarity consistency checking of SLs with accompanying methods to unearth mislabeled words and synsets when consistency is not satisfied. We translate the polarity consistency problem (PCP) into a form of the LP problem, suitable as the input to a standard LP solver, and utilize the functionality available in modern LP software (e.g., identifying an irreducible infeasible subset) to pinpoint the sources of inconsistencies when they occur. In our experimentation we are able to quickly uncover numerous intra- and inter-lexicon inconsistencies in all of the input SLs tested and to suggest lexicon entries for a linguist to focus on in “debugging” the lexicon. Background and Previous Work Sentiment resources have taken two basic approaches to polarity annotation: discrete and fractional. In the discrete approach, polarity is defined to be one of the discrete values positive, negative, or neutral. A word or a synset takes exactly one of the three values. QWN, AL, GI, and OF follow the discrete polarity annotation. In the fractional approach, polarity is defined as a 3-tuple of nonnegative real numbers that sum to 1, corresponding to the positive, negative, and neutral values respectively. SWN, Micro-WNOp, and Hassan and Radev (2010) employ a fractional polarity annotation. For example, the single synset of the adjective admissible in WordNet has the sentiment tags positive in QWN and ⟨.25, .625, .125⟩ in SWN, so here SWN gives a primarily negative polarity with some positive and less neutral polarity. We denote by PCP-D and PCP-F the polarity laughable : positive risible : ? comic : negative s2 : “of or relating to or characteristic of comedy” s3 : “arousing or provoking laughter” s1 : “so unreasonable as to invite derision” 0.5 0.5 1 0.6 0.4 Figure 1: Discrete vs. fractional polarity consistency. Example taken from Dragut et al. (2012). consistency problem for the discrete and fractional polarity annotations, respectively. Dragut et al. 
(2012) introduces the PCP for domain independent SLs and gives a solution to a particular form of the PCP-D, but that method cannot solve PCP-F. For example, they show that the adjectives laughable, comic, and risible (Figure 1) constitute an inconsistency in the discrete case. AL gives positive polarity for laughable and OF gives negative for comic. If s2 is not positive then laughable is not positive and if s2 is not negative then comic is not negative, so there is no assignment of s2 that satisfies the whole system. Hence there is an inconsistency. However, the following fractional polarity tags do satisfy the system: s1 : ⟨1, 0, 0⟩, s2 : ⟨.66, .34, 0⟩, s3 : ⟨0, 1, 0⟩, where the meaning of the second tag, for instance, is that s2 is .66 positive, .34 negative, and 0 neutral. We thus see that the discrete polarity annotation is rigid and leads to more inconsistencies, whereas the fractional annotation captures more naturally the polarity spectrum of a word or synset. In this paper we give a solution to the PCP-F. The differences between our solution and that of Dragut et al. (2012) give some insight into the general differences between the fractional and discrete problems. First, the discrete case is intractable, i.e., computationally NP-complete (Dragut et al., 2012); we show in this paper (Section 3.2) that the fractional case is tractable (solvable in polynomial time). Second, the PCP-D is solved in Dragut et al. (2012) by translation to the Boolean satisfiability problem (SAT) (Schaefer, 1978); here we recast the PCPF in terms of LP theory. Third, we show that the LP framework is a natural setting for the PCP as a whole, and that the PCP-D corresponds to the 01 integer LP problem (Section 3.2), a classic NPcomplete problem (Karp, 2010). Our experiments (Section 5.4) show that correcting even a small number of inconsistencies can greatly improve the accuracy of sentiment annotation tasks. We implement our algorithm as a versatile tool for debugging SLs, which helps locate the 1025 sources of error in SLs. We apply our algorithm to both SWLs and SSLs and demonstrate the usefulness of our approach to improving SLs. The main contributions of this paper are: • solve the PCP-F; • show that the PCP-F is tractable; • show that the PCP is an instance of LP; • develop a technique for identifying inconsistencies in SLs of various types; • implement our algorithm as a prototype SL debugger; • show that there is a strong correlation between polarity inconsistency in SLs and the performance of sentiment tagging tools developed on them. 2 Problem Definition In this section we give a formal characterization of the polarity assignment of words and synsets in SLs using WordNet. We use −, +, 0 to denote negative, positive, and neutral polarities, respectively, throughout the paper. 2.1 Polarity Representation We define the polarity of a synset or word r in WordNet to be a discrete probability distribution, called a polarity distribution: P+(r), P−(r), P0(r) ≥0 with P+(r) + P−(r) + P0(r) = 1. P+(r), P−(r) and P0(r) represent the “likelihoods” that r is positive, negative or neutral, respectively. For instance, the WordNet synset “worthy of reliance or trust” of the adjective reliable is given the polarity distribution P+ = .375, P−= .0 and P0 = .625 in SentiWordNet. We may drop r from the notation if the meaning is clear from context. 
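A minimal sketch of the word-synset network N = (W, S, E, f) and of rf(w, s) = f(w, s)/freq(w), with the ε = 0.1 smoothing applied when some f(w, s) = 0; the class layout is purely illustrative, and applying the smoothed counts in the denominator as well is our assumption.

```python
from collections import defaultdict

class WordSynsetNetwork:
    """N = (W, S, E, f): words, synsets, and word-synset edges with frequencies."""
    def __init__(self, epsilon=0.1):
        self.f = defaultdict(dict)      # f[w][s] = frequency of w used in sense s
        self.epsilon = epsilon

    def add_edge(self, word, synset, freq):
        self.f[word][synset] = freq

    def rf(self, word, synset):
        """Relative frequency rf(w, s) = f(w, s) / freq(w). If any f(w, s) = 0,
        every synset frequency of w is first increased by epsilon."""
        freqs = dict(self.f[word])
        if any(v == 0 for v in freqs.values()):
            freqs = {s: v + self.epsilon for s, v in freqs.items()}
        return freqs[synset] / sum(freqs.values())

# e.g. an adjective with sense counts 9, 1, 1 yields rf = 9/11, 1/11, 1/11,
# matching the relative frequencies cited below for "reliable"
```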
The use of a polarity distribution to describe the polarity of a word or synset is shared with many previous works (Andreevskaia and Bergler, 2006; Baccianella et al., 2010; Kim and Hovy, 2006). 2.2 WordNet A word-synset network N is a 4-tuple (W, S, E, f) where W is a finite set of words, S is a finite set of synsets, E ⊆W × S and f is a function assigning a positive integer to each element in E. For any word w and synset s, s is a synset of w if (w, s) ∈E. For a pair (w, s) ∈E, f(w, s) is called the frequency of use of w in the sense given by s. For a word w, we let freq(w) denote the sum of all f(w, s) such that (w, s) ∈E. We define the relative frequency of w with s by rf(w, s) = f(w,s) freq(w). If f(w, s) = 0, the frequency of each synset of w is increased by a small constant ϵ. We use ϵ = .1 in our prototype. 2.3 Word Polarities We contend that there exists a relation between the sentiment orientation of a word and the polarities of its related senses (synsets), and we make the assumption that this relation takes the form of a linear function. Thus, for w ∈W and p ∈{+, −, 0}, the polarity distribution of w is defined as: Pp(w) = X s∈Sw g(w, s) · Pp(s), (1) where Pp(s) is the polarity value of synset s with polarity p and g(w, s) is a rational number. For example, g can be the relative frequency of s with respect to w in WordNet: g(w, s) = rf(w, s); ∀w ∈W, s ∈S. Alternatively, for each word w we can draw g(w, ·) from a Zipfian distribution, following the observation that the distribution of word senses roughly follows a Zipfian power-law (Kilgarriff, 2004; Sanderson, 1999). In this paper, we will assume g(w, s) = rf(w, s). For example, the three synsets of the adjective reliable with relative frequencies 9 11, 1 11, and 1 11, respectively, are given the distributions ⟨.375, 0, .625⟩, ⟨.5, 0, .5⟩, and ⟨.625, 0, .375⟩in SentiWordNet. So for reliable we have P+ = 9 110.375 + 1 110.5 + 1 110.625 ≈0.41, P−= 0, and P0 = 9 110.625 + 1 110.5 + 1 110.375 ≈0.59. 2.4 Modeling Sentiment Orientation in SLs Words and synsets have unique polarities in some SLs, e.g., AL and OF. For instance, reliable has positive polarity in AL, GI, and OF. The question is: what does a discrete annotation of reliable tell us about its polarity distribution? One might take it to mean that the polarity distribution is simply ⟨1, 0, 0⟩. This contradicts the information in SWN, which gives some neutral polarity for all of the synsets of reliable. So a better polarity distribution would allow P0 > 0. Furthermore, given that ⟨.41, 0, .59⟩, ⟨.40, 0, .60⟩, and ⟨.45, 0, .55⟩give virtually identical information to a sentiment analyst, it seems unreasonable to expect exactly one to be the correct polarity tag for reliable and the other two incorrect. Therefore, instead of claiming to pinpoint an exact polarity distribution for a word, we propose to set a boundary on its variation. This establishes a 1026 range of values, instead of a single point, in which SLs can be said to agree. Thus, for a word w, we can define polarity(w) =  + if P+ > P− − if P−> P+ (2) which we refer to as MAX POL. This model is adopted either explicitly or implicitly by numerous works (Hassan and Radev, 2010; Kim and Hovy, 2004; Kim and Hovy, 2006; Qiu et al., 2009). 
Another model is the majority sense model, called MAJORITY (Dragut et al., 2012), where

  polarity(w) = + if P+ > P− + P0, and − if P− > P+ + P0.  (3)

Another polarity model, MAX, is defined as

  polarity(w) = + if P+ > P− and P+ > P0, and − if P− > P+ and P− > P0.  (4)

For instance, reliable conveys positive polarity according to MAX POL, since P+ > P−, but neutral according to MAJORITY. When the condition of being neither positive nor negative can be phrased as a conjunction of linear inequalities, as is the case with MAJORITY and MAX POL, we define neutral as not positive and not negative. These model definitions can be applied to synsets as well.

2.5 Polarity Consistency Definition

Instead of defining consistency for SLs dependent on a choice of model, we develop a generic definition applicable to a wide variety of models, including all of those discussed above. We require that the polarity of a word or synset in the network N be characterized by a set of linear inequalities (constraints) with rational coefficients. Formally, for each word w ∈ W, the knowledge that w has a discrete polarity p ∈ {+, −, 0} is characterized by a set of linear inequalities:

  ψ(w, p) = {ai,0 P+ + ai,1 P− + ai,2 P0 ⪯ bi},  (5)

where ⪯ ∈ {≤, <} and ai,0, ai,1, ai,2, bi ∈ Q, i = 0, 1, . . . , m. For instance, if the MAX model is used, for w = worship, whose polarity is positive in OF, we get the following set of inequalities: ψ(w, +) = {P+ − P− > 0, P+ − P0 > 0} = {(−1)P+ + 1P− + 0P0 < 0, (−1)P+ + 0P− + 1P0 < 0}.

Let L be an SL. We denote the system of inequalities introduced by all words and synsets in L with known polarities in the network N by Ψ′(N, L). The variables in Ψ′(N, L) are P+(r), P−(r) and P0(r), r ∈ W ∪ S. Denote by Υ′(N, L) the set of constraints implied by the polarity distributions for all r ∈ L: P+(r) + P−(r) + P0(r) = 1 and Pp(r) ≥ 0 for p ∈ {+, −, 0}, ∀r ∈ W ∪ S. Let Φ′(N, L) = Ψ′(N, L) ∪ Υ′(N, L).

[Figure 2: A network of 4 words and 3 synsets. The nouns perseverance (+), persistence (0), pertinacity (−), and tenacity (+) are linked to the synsets s1 "persistent determination", s2 "the act of persisting or persevering", and s3 "the property of a continuous period of time"; the edge weights are the relative frequencies 0.5 (w1–s1), 0.5 (w1–s2), 0.29 (w2–s1), 0.01 (w2–s2), 0.7 (w2–s3), 1 (w3–s1), and 1 (w4–s1).]

Example 1. Let w1, w2, w3, and w4 be the nouns perseverance, persistence, pertinacity, and tenacity, respectively, which are in OF with polarities +, 0, −, and +, respectively (Figure 2). Assuming the MAJORITY model, ψ(w1, +) = {P+(w1) > P−(w1) + P0(w1)} = {P+(w1) > 1 − P+(w1)} = {−P+(w1) < −1/2}, and ψ(w2, 0) = {P+(w2) ≤ P−(w2) + P0(w2), P−(w2) ≤ P+(w2) + P0(w2)} = {P+(w2) ≤ 1/2, P−(w2) ≤ 1/2}. Similarly, ψ(w3, −) = {−P−(w3) < −1/2} and ψ(w4, +) = {−P+(w4) < −1/2}.

Definition 1. A sentiment lexicon L is consistent if the system Φ′(N, L) is feasible, i.e., has a solution.

The PCP is then the problem of deciding if a given SL L is consistent. In general, the PCP can be stated as follows: given an assignment of polarities to the words, does there exist an assignment of polarities to the synsets that agrees with that of the words? If the polarity annotation is discrete, we have the PCP-D; if the polarity is fractional, we have the PCP-F. Our focus in this paper is the PCP-F.

The benefits of a generic problem model are at least two-fold. First, different linguists may have different views about the kinds of inequalities one should use to express the probability distribution of a word with a unique polarity in some SL. The new model can accommodate divergent views as long as they are expressed as linear constraints. Second, the results proven for the generic model will hold for any particular instance of the model.
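To illustrate Equation (1) with g(w, s) = rf(w, s) together with the decision rules (2)-(4), here is a brief sketch; the function names and the string-valued model argument are illustrative choices, not the authors' prototype.

```python
def word_distribution(synset_dists, rel_freqs):
    """Eq. (1) with g(w, s) = rf(w, s): P_p(w) = sum_s rf(w, s) * P_p(s).
    `synset_dists` is a list of (P+, P-, P0) triples, `rel_freqs` their weights."""
    return tuple(sum(rf * d[i] for rf, d in zip(rel_freqs, synset_dists))
                 for i in range(3))

def polarity(dist, model="MAJORITY"):
    """Discrete polarity of a distribution under the models of Eqs. (2)-(4)."""
    pos, neg, neu = dist
    if model == "MAX_POL":                       # Eq. (2)
        return "+" if pos > neg else "-" if neg > pos else "0"
    if model == "MAJORITY":                      # Eq. (3)
        return "+" if pos > neg + neu else "-" if neg > pos + neu else "0"
    if model == "MAX":                           # Eq. (4)
        return ("+" if pos > neg and pos > neu
                else "-" if neg > pos and neg > neu else "0")

# "reliable": synsets <.375,0,.625>, <.5,0,.5>, <.625,0,.375> with rf 9/11, 1/11, 1/11
# dist = word_distribution([(.375,0,.625), (.5,0,.5), (.625,0,.375)], [9/11,1/11,1/11])
# polarity(dist, "MAX_POL") -> "+"   ;   polarity(dist, "MAJORITY") -> "0"
```

On the reliable example above, this reproduces the divergence noted earlier: positive under MAX POL but neutral under MAJORITY.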
3 Polarity Consistency: an LP Approach A careful analysis of the proposed formulation of the problem of SL consistency checking reveals that this can be naturally translated into an LP problem. The goal of LP is the optimization of a linear objective function, subject to lin1027 ear (in)equality constraints. LP problems are expressed in standard form as follows: minimize cTx subject to Ax ≤b (6) and x ≥0 x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and (·)T is the matrix transpose. An LP algorithm finds a point in the feasible region where cTx has the smallest value, if such a point exists. The feasible region is the set of x that satisfy the constraints Ax ≤b and x ≥0. There are several non-trivial challenges that need to be addressed in transforming our problem (i.e., the system Φ′(N, L)) into an LP problem. For instance, we have both strict and weak inequalities in our model, whereas standard LP does not include strict inequalities. We describe the steps of this transformation next. 3.1 Translation to LP In our problem, x is the concatenation of all the triplets ⟨P+(r), P−(r), P0(r)⟩for all r ∈W ∪S. Eliminate Word Related Variables. For each word w ∈L we replace P+(w), P−(w) and P0(w) with their corresponding expressions according to Equation 1; then the linear system Φ′(N, L) has only the synset variables P+(s), P−(s) and P0(s) for s ∈S. Example (continued). Using the relative frequencies of Figure 2 in Equation 1 we get: ψ(w1,+)= {−.5P+(s1) −.5P+(s2) < −1 2}, ψ(w2,0)={.29P+(s1)+.01P+(s2)+.7P+(s3)≤1 2, .29P−(s1) + .01P−(s2) + .7P−(s3) ≤1 2}, ψ(w3,−)= {−P−(s1) < −1 2}, and ψ(w4,+)= {−P+(s1) < −1 2}. Equality. The system Φ′(N, L) contains constraints of the form P+(s)+P−(s)+P0(s)=1 for each s ∈S, but observe that there are no equality constraints in the standard LP form (Equation 6). The usual conversion procedure is to replace a given equality constraint: aTx=b, with: aTx≤b and −aTx ≤−b. However, this procedure increases the number of constraints in Φ′(N, L) linearly. This can have a significant computation impact since Φ′(N, L) may have thousands of constraints (see discussion in Section 5.3). Instead, we can show that the system F obtained by performing the following two-step transformation is equivalent to Φ′(N, L), in the sense that F is feasible iff Φ′(N, L) is feasible. For every s ∈S, (Step 1) we convert each P+(s)+P−(s)+P0(s)=1 to P+(s)+P−(s)≤1, and (Step 2) we replace every P0(s) in Φ′(N, L) with 1 −P+(s) −P−(s). Strict Inequalities. Strict inequalities are not allowed in LP and their presence in inequality systems in general poses difficulties to inequality system solvers (Goberna et al., 2003; Goberna and Rodriguez, 2006; Ghaoui et al., 1994). Fortunately results developed by the LP community allow us to overcome this obstacle and maintain the flexibility of our proposed model. We introduce a new variable y ≥0, and for every strict constraint of the form aTx < b, we rewrite the inequality as aTx + y ≤b. Let Φ′′(N, L) be this new system of constraints. We modify the objective function (previously null) to maximize y (i.e., minimize −y). Denote by F ′ the LP that maximizes y subject to Φ′′(N, L). We can show that Φ′(N, L) is feasible iff F ′ is feasible and y ̸= 0. A sketch of the proof is as follows: if y > 0 then aTx + y ≤b implies aTx < b. Conversely, if aTx < b then ∃y > 0 such that aTx + y ≤b, and maximizing for y will yield a y > 0 iff one is feasible. 
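As a concrete illustration of the translation, the sketch below encodes the example constraints derived above from Figure 2 (after word-variable elimination and P0 substitution), adds the slack variable y for the strict inequalities, and asks an off-the-shelf LP solver to maximize y. This is an illustrative reconstruction using scipy rather than the GUROBI-based implementation described later; the variable ordering and numerical tolerance are our own choices.

```python
from scipy.optimize import linprog

# Variable order: P+(s1), P-(s1), P+(s2), P-(s2), P+(s3), P-(s3), y.
# Each strict inequality a.x < b is rewritten as a.x + y <= b, and we maximize y
# (i.e. minimize -y); the original system is feasible iff the optimum has y > 0.
c = [0, 0, 0, 0, 0, 0, -1]

A_ub = [
    [-0.5, 0.0, -0.5, 0.0, 0.0, 0.0, 1.0],   # psi(w1,+): -.5 P+(s1) - .5 P+(s2) < -1/2
    [0.29, 0.0, 0.01, 0.0, 0.7, 0.0, 0.0],   # psi(w2,0), positive part <= 1/2
    [0.0, 0.29, 0.0, 0.01, 0.0, 0.7, 0.0],   # psi(w2,0), negative part <= 1/2
    [0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 1.0],    # psi(w3,-): -P-(s1) < -1/2
    [-1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],    # psi(w4,+): -P+(s1) < -1/2
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],     # upsilon(s1): P+(s1) + P-(s1) <= 1
    [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0],     # upsilon(s2)
    [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0],     # upsilon(s3)
]
b_ub = [-0.5, 0.5, 0.5, -0.5, -0.5, 1.0, 1.0, 1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 7, method="highs")
consistent = res.status == 0 and res.x[-1] > 1e-9
print("max y =", res.x[-1] if res.status == 0 else None, "consistent:", consistent)
# For this example max y = 0, so the word polarities are inconsistent (cf. Section 4).
```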
This step is omitted if we have no strict constraints in Φ′(N, L). Example (continued). The formulations of ψ(w1, +), ψ(w3, −), and ψ(w4, +) involve strict inequalities, so they are rewritten in Φ′′(N, L), e.g., ψ′′(w4, +) = {−P+(s1) + y ≤−1 2}. We denote by Φ(N, L) the standard form of Φ′(N, L) obtained by applying the above steps. This is the input to an LP solver. Theorem 1. Sentiment lexicon L is polarity consistent iff Φ(N, L) is feasible. 3.2 Time Complexity For the network N and an SL L, the above translation algorithm converts the PCP into an LP problem on the order of O(|E|), a polynomial time conversion. The general class of linear programming problems includes subclasses that are NP-hard, such as the integer linear programming (ILP) problems, as well as polynomial solvable subclasses. We observe that our problem is represented by a system of rational linear inequalities. This class of LP problems is solvable in polynomial time (Khachiyan, 1980; G´acs and Lov´asz, 1981). This (informally) proves that the PCP-F is solvable in polynomial time. PCP is NP-complete in the discrete case (Dragut et al., 2012). This is not surprising since in our LP formulation of the 1028 PCP, the discrete case corresponds to the 0-1 integer programming (BIP) subclass. (Recall that in the discrete case each synset has a unique polarity.) BIP is the special case of integer programming where variables are required to be 0 or 1. BIP is a classic NP-hard problem (Garey and Johnson, 1990). We summarize these statements in the following theorem. Theorem 2. The PCP-F problem is P and the PCP-D is NP-complete. We proved a more general and more comprehensive result than Dragut et al. (2012). The PCP solved by Dragut et al. (2012) is a particular case of PCP-D: it can be obtained by instantiating our framework with the MAJORITY model (Equation 3) and requiring each synset to take a unique polarity. We believe that the ability to encompass both fractional and discrete cases within one framework, that of LP, is an important contribution, because it helps to give structure to the general problem of polarity consistency and to contextualize the difference between the approaches. 4 Towards Debugging SLs Simply stating that an SL is inconsistent is of little practical use unless accompanying assistance in diagnosing and repairing inconsistencies is provided. Automated assistance is necessary in the face of the scale and complexity of modern SLs: e.g., AL has close to 7,000 entries, SWN annotates the entirety of WordNet, over 206,000 word-sense pairs. There are unique and interesting problems associated with inconsistent SLs, among them: (1) isolate a (small) subset of words/synsets that is polarity inconsistent, but becomes consistent if one of them is removed; we call this an Irreducible Polarity Inconsistent Subset (IPIS); (2) return an IPIS with smallest cardinality (intuitively, such a set is easiest to repair); (3) find all IPISs, and (4) find the largest polarity consistent subset of an inconsistent SL. In the framework of linear systems of constraints, the problems (1) - (4) correspond to (i) the identification of an Irreducible Infeasible Subset (IIS) of constraints within Φ(N, L), (ii) finding IIS of minimum cardinality, (iii) finding all IISs and (iv) finding the largest set of constraints in Φ(N, L) that is feasible, respectively. An IIS is an infeasible subset of constraints that becomes feasible if any single constraint is removed. 
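When the solver does not expose an IIS directly, a standard way to isolate one is deletion filtering (Chinneck, 2008): tentatively drop each constraint in turn, keep it dropped if the remainder is still infeasible, and restore it otherwise. The sketch below is illustrative only; it assumes a feasibility test like the one sketched above and labels each constraint with the word or synset that produced it, so that the surviving labels form the inconsistent subset of lexicon entries.

```python
def deletion_filter(constraints, is_feasible):
    """Deletion filtering: return one irreducible infeasible subset (IIS) of an
    infeasible constraint set.  `constraints` is a list of (label, row) pairs,
    where label names the word/synset that introduced the row, and
    is_feasible(rows) runs the LP feasibility test of Section 3.
    Illustrative sketch; modern solvers (e.g. GUROBI) can return an IIS directly."""
    assert not is_feasible([row for _, row in constraints]), "system is feasible"
    kept = list(constraints)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]              # tentatively drop constraint i
        if not is_feasible([row for _, row in trial]):
            kept = trial                             # still infeasible: drop it for good
        else:
            i += 1                                   # needed for infeasibility: keep it
    return kept                                      # an IIS; its labels give an IPIS
```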
Problems (ii) - (iv) are NP-hard and some may even be difficult to approximate (Amaldi and Kann, 1998; Chinneck, 2008; Chakravarti, 1994; Tamiz et al., 1996). We focus on problem (1) in this paper, which we solve via IIS discovery. We keep a bijective mapping from words and synsets to constraints such that for any given constraint, we can uniquely identify the word or synset in Φ(N, L) from which it was introduced. Hence, once an IIS is isolated, we know the corresponding words or synsets. Modern LP solvers typically can give an IIS when a system is found to be infeasible, but none give all IISs or the IIS of minimum size. Example (continued). The polarity assignments of w1, w2, w3, and w4, are consistent iff there exist polarity distributions ⟨P+(si), P−(si), P0(si)⟩for i = 1, 2, 3, such that: y > 0 ψ(w1, +): −.5P+(s1) + .5P+(s2) + y ≤−1 2, ψ(w2,0):.29P+(s1) + .01P+(s2) + .7P+(s3)≤1 2 AND .29P−(s1) + .01P−(s2) + .7P−(s3)≤1 2, ψ(w3, −) : −P−(s1) + y ≤−1 2, ψ(w4, +) : −P+(s1) + y ≤−1 2, υ(s1):P+(s1)+P−(s1)≤1AND P+(s1), P−(s1)≥0, υ(s2):P+(s2)+P−(s2)≤1AND P+(s2), P−(s2)≥0, υ(s3):P+(s3)+P−(s3)≤1AND P+(s3), P−(s3)≥0. Upon examination, if y > 0, then ψ(w3, −) implies P−(s1) > 1 2 and ψ(w4, +) implies P+(s1) > 1 2. Then P+(s1) + P−(s1) > 1, contradicting υ(s1). Hence, this LP system is infeasible. Moreover {ψ(w3, −), ψ(w4, +), υ(s1)} is an IIS. Tracing back we get that the set of words {w3, w4} is inconsistent. Therefore it is an IPIS. Isolating IPISs helps focus SL diagnosis and repair efforts. Fixing SLs via IIS isolation proceeds iteratively: (1) isolate an IIS, (2) determine a repair for this IIS, (3) if the model is still infeasible, go to step (1). This approach is well summarized by Greenberg’s aphorism: “diagnosis = isolation + explanation” (Greenberg, 1993). The proposed use requires human interaction to effect the changes to the lexicon. One might ask if this involvement is strictly necessary; in response we draw a parallel between our SL debugger and a software debugger. A software debugger can identify a known programming error, say the use of an undefined variable. It informs the programmer, but it does not assign a value to the variable itself. It requires the user to make the desired assignment. Similarly, our debugger can deterministically identify an inconsistent component, but it cannot deterministically decide which elements to adjust. In most cases, this is simply not an objective decision. To illustrate this point, from our example, we know that minimally one 1029 SL adj. adv. noun verb total UN 3,084 940 2,340 1,812 8,176 AL 1,486 377 2 0 1,865 GI 1,337 121 1,474 1,050 3,982 SWLs OF 2,608 775 1,907 1,501 6,791 SWN 18,156 3,621 82,115 13,767 117,659 QWN 4,060 40 7,404 4,006 15,510 SSLs MWN 255 30 487 283 1,055 Table 1: Counts of words/synsets in each SL of pertinacity(−) and tenacity(+) must be adjusted, but the determination as to which requires the subjective analysis of a domain expert. In this paper, we do not repair any of the discovered inconsistencies. We focus on isolating as many IPISs as possible. 5 Experiments The purpose of our experimental work is manifold, we show that: (1) inconsistencies exist in and between SLs, (2) our algorithm is effective at uncovering them in the various types of SLs proposed in the literature, (3) fractional polarity representation is more flexible than discrete, giving orders of magnitude fewer inconsistencies, and (4) sentiment analysis is significantly improved when the inconsistencies of a basis SL are corrected. 
Experiment Setup: We use four SWLs: GI, AL, OF and their union, denoted UN, and three SSLs: QWN, SWN and MicroWN-Op. The distribution of their entries is given in Table 1. The MAJORITY model (Equation 3) is used in all trials. This allows for direct comparison with Dragut et al. (2012). We implemented our algorithm in Java interfacing with the GUROBI LP solver2, and ran the tests on a 4 × 1.70GHz core computer with 6GB of main memory. 5.1 Inconsistencies in SWLs In this set of experiments, we apply our algorithm to GI, AL, OF and UN. We find no inconsistencies in AL, only 2 in GI, and 35 in both UN and OF (Table 2). (Recall that an inconsistency is a set of words whose polarities cannot be concomitantly satisfied.) These numbers do not represent all possible inconsistencies (See discussion in Section 4). In general, the number of IISs for an infeasible system can be exponential in the size of the system Φ(N, L) (Chakravarti, 1994), however our results suggest that in practice this does not occur. Compared with Dragut et al. (2012), we see a marked decrease in the number of inconsistencies. 2www.gurobi.com adj. adv. noun verb total UN 8 14 5 8 35 AL 0 0 0 0 GI 2 0 0 0 2 OF 7 15 4 9 35 Table 2: SWL-Internal Inconsistencies Inconsistency Ratios SWL adj. adv. noun verb total UN 0.67 0.89 0.85 0.81 0.78 AL 0.63 0.8 1 0.66 GI 0.6 0.41 0.87 0.91 0.78 OF 0.66 0.87 0.82 0.77 0.76 Table 3: SentiWordNet paired with SWLs They found 249, 2, 14, and 240 inconsistencies in UN, AL, GI, and OF, respectively. These inconsistencies are obtained in the first iteration of their SAT-Solver. This shows that about 86% of inconsistent words in a discrete framework can be made consistent in a fractional system. 5.2 Inconsistencies in SSLs In this set of experiments we check the polarity inconsistencies between SWLs and SSLs. We pair each SSL with each of the SWLs. SentiWordNet. SWN is an automatically generated SL with a fractional polarity annotation of every synset in WordNet. Since SWN annotates every synset in WordNet, there are no free variables in this trial. Each variable Pp∈{+,−,0}(s) for s ∈S is fully determined by SWN, so this amounts to a constant on the left hand side of each inequality. Our task is to simply check whether the inequality holds between the constant on the left and that on the right. Table 3 gives the proportion of words from each SWL that is inconsistent with SWN. We see there is substantial disagreement between SWN and all of the SWLs, in most cases more than 70% disagreement. For example, 5,260 of the 6,921 words in OF do not agree with the polarities assigned to their senses in SWN. This outcome is deeply surprising given that all these SLs are domain independent – no step in their construction processes hints to a specific domain knowledge. This opens up the door to future analysis of SL acquisition. For instance, examining the impact that model choice (e.g., MAJORITY vs. MAX) has on inter-lexicon agreement. Q-WordNet. QWN gives a discrete polarity for 15,510 WordNet synsets. When a synset is annotated in QWN, its variables, Pp∈{+,−,0}(s), are assigned the QWN values in Φ; a feasible assignment is sought for the remaining free variables. An inconsistency may occur among a set of words, or 1030 UN AL GI OF total 345 34 139 325 Table 4: Q-WordNet paired with SWLs. a set of words and synsets. Table 4 depicts the outcome of this study. We obtain 345 inconsistencies between QWN and UN. 
The reduced number of inconsistencies with AL (34) is explained by their limited “overlay” (QWN has only 40 adverb synsets). Dragut et al. (2012) reports 455 inconsistencies between QWN and UN, 110 more than we found here. Again, this difference is due to the rigidity of the discrete case, which leads to more inconsistencies in general. Micro-WNOp. This is a fractional SSL of 1,105 synsets from WordNet manually annotated by five annotators. The synsets are divided into three groups: 110 annotated by the consensus of the annotators, 496 annotated individually by three annotators, and 499 annotated individually by two annotators. We take the average polarities of groups 2 and 3 and include this data as two additional sets of values. Table 5 gives the inconsistencies per user in each group. For Groups 2 and 3, we give the average number of inconsistencies among the users (Avg. Incons. in Table 5) as well as the inconsistencies of the averaged annotations (Avg. User in Table 5). Micro-WNOp gives us an opportunity to analyze the robustness of our method by comparing the number of inconsistencies of the individual users to that of the averaged annotation. Intuitively, we expect that the average number of inconsistencies in a group of users to be close to the number of inconsistencies for the user averaged annotations. This is clearly apparent from Table 5, when comparing Lines 4 and 5 in Group 2 and Lines 3 and 4 in Group 3. For example, Group 2 has an average of 68 inconsistencies for OF, which is very close to the number of inconsistencies, 63, obtained for the group averaged annotations. This study suggests a potential application of our algorithm: to estimate the confidence weight (trust) of a user’s polarity annotation. A user with good polarity consistency receives a higher weight than one with poor polarity consistency. This can be applied in a multi-annotator SL scenario. 5.3 Computation We provide information about the runtime execution of our method in this section. Over all of our experiments, the resulting systems of constraints can be as small as 2 constraints with 2 variables UN AL GI OF Common 45 3 13 43 User 1 88 10 59 75 User 2 50 8 24 48 User 3 97 12 64 82 Avg. Incons. 78 10 49 68 Group 2 Avg. User 1,2,3 69 8 40 63 User 4 72 9 46 60 User 5 70 8 46 59 Avg. Incons. 71 9 46 60 Group 3 Avg. User 4,5 68 8 42 57 Table 5: Micro-WNOp – SWD Inconsistencies and as large as 3,330 constraints with 4,946 variables. We achieve very good overall execution times, 68 sec. on average. At its peak, our algorithm requires 770MB of memory. Compared to the SAT approach by Dragut et al. (2012), which takes about 10 min. and requires about 10GB of memory, our method is several orders of magnitude more efficient and more practical, paving the way to building practical SL debugging tools. 5.4 Inconsistency & Sentiment Annotation This experiment has two objectives: (1) show that two inconsistent SLs give very different results when applied to sentiment analysis tasks and (2) given an inconsistent SL D, and D′ an improved version of D with fewer inconsistencies, show that D′ gives better results than D in sentiment analysis tasks. We use a third-party sentiment annotation tool that utilizes SLs, Opinion Parser (Liu, 2012). We give the instantiations of D below. In (1), we use the dataset aclImdb (Maas et al., 2011), which consists of 50,000 reviews, and the SLs UN and SWN. 
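For a fully annotated SSL such as SWN, the pairing checks in this section reduce to plugging the fixed synset values into Equation (1) and testing the word's ψ inequalities directly, with no free variables left. A minimal, self-contained sketch (our own function names), using the reliable example of Section 2.3:

```python
def satisfies_psi(P, psi_rows):
    """Check a fully determined word distribution P = (P+, P-, P0) against
    psi(w, p) rows of Equation (5), given as (a+, a-, a0, b, strict) tuples."""
    for a_pos, a_neg, a_neu, b, strict in psi_rows:
        lhs = a_pos * P[0] + a_neg * P[1] + a_neu * P[2]
        if (lhs >= b) if strict else (lhs > b):
            return False                             # inequality violated: disagreement
    return True

# "reliable" is tagged + in OF, but its SWN-derived distribution (Section 2.3) is
# about (0.41, 0, 0.59).  Under MAJORITY, '+' requires -P+ + P- + P0 < 0, which
# fails here, so OF and SWN disagree on this word.
print(satisfies_psi((0.41, 0.0, 0.59), [(-1, 1, 1, 0, True)]))   # False
```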
Let UN′ and SWN′ be the subsets of UN and SWN, respectively, with the property that they have the same set of (word, pos) pair entries and word appears in aclImdb. UN′ and SWN′ have 6,003 entries. We select from aclImdb the reviews with the property that they contain at least 50 words in SWN′ and UN′. This gives 516 negative and 567 positive reviews, a total of 1,083 reviews containing a total of 31,701 sentences. Opinion Parser is run on these sentences using SWN′ and UN′. We obtain that 16,741 (52.8%) sentences acquire different polarities between the two SLs. In (2), we use 110 randomly selected sentences from aclImdb, which we manually tagged with their overall polarities. We use OF and OF′, where OF′ is the version of OF after just six inconsistencies are manually fixed. We run Opinion Parser on these sentences using OF and OF′. We obtain an accuracy of 42% with OF and 47% with OF′, an 1031 improvement of 8.5% for just a small fraction of corrected inconsistencies. These two experiments show a strong correlation between polarity inconsistency in SLs and its effect on sentiment tagging in practice. 6 Conclusion Resolving polarity inconsistencies helps to improve the accuracy of sentiment analysis tasks. We show that LP theory provides a natural framework for the polarity consistency problem. We give a polynomial time algorithm for deciding whether an SL is polarity consistent. If an SL is found to be inconsistent, we provide an efficient method to uncover sets of words or word senses that are inconsistent and require linguists’ attention. Effective SL debugging tools such as this will help in the development of improved SLs for use in sentiment analysis tasks. 7 Acknowledgments We would like to thank Bing Liu for running the experiments of Section 5.4 on his commercial tool Opinion Parser, Christiane Fellbaum for the discussions on polarity inconsistency, and Prasad Sistla for the discussions on linear programming. We would also like to thank the reviewers for their time, effort, and insightful feedback. References Rodrigo Agerri and Ana Garc´ıa-Serrano. 2010. Qwordnet: Extracting polarity from wordnet senses. In LREC. Edoardo Amaldi and Viggo Kann. 1998. On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science, 209. A. Andreevskaia and S. Bergler. 2006. Mining wordnet for fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In EACL. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In LREC. Avrim Blum, John Lafferty, Mugizi Robert Rwebangira, and Rajashekar Reddy. 2004. Semi-supervised learning using randomized mincuts. In ICML. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In IJCAI. Juergen Bross and Heiko Ehrig. 2013. Automatic construction of domain and aspect specific sentiment lexicons for customer review mining. In CIKM. S. Cerini, V. Compagnoni, A. Demontis, M. Formentelli, and G. Gandini, 2007. Language resources and linguistic theory: Typology, second language acquisition, English linguistics., chapter Micro-WNOp: A gold standard for the evaluation of automatically compiled lexical resources for opinion mining. Franco Angeli Editore, Milano, IT. Nilotpal Chakravarti. 1994. Some results concerning post-infeasibility analysis. European Journal of Operational Research, 73(1). Yanqing Chen and Steven Skiena. 2014. 
Building sentiment lexicons for all major languages. In ACL. John W Chinneck. 2008. Feasibility and infeasibility in optimization: algorithms and computational methods. International Series in Operations Research and Management Science. Springer, Dordrecht. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In EMNLP. Yoonjung Choi and Janyce Wiebe. 2014. +/effectwordnet: Sense-level lexicon acquisition for opinion inference. In EMNLP. Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In WSDM. Eduard C. Dragut, Hong Wang, Clement Yu, Prasad Sistla, and Weiyi Meng. 2012. Polarity consistency checking for sentiment dictionaries. In ACL. Weifu Du, Songbo Tan, Xueqi Cheng, and Xiaochun Yun. 2010. Adapting information bottleneck method for automatic construction of domainoriented sentiment lexicon. In WSDM. A. Esuli and F. Sebastiani. 2006. Determining term subjectivity and term orientation for opinion mining. In EACL. Ronen Feldman. 2013. Techniques and applications for sentiment analysis. Commun. ACM, 56(4). C. Fellbaum. 1998. WordNet: An On-Line Lexical Database and Some of its Applications. MIT Press, Cambridge, MA. Song Feng, Jun Sak Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In ACL. Peter G´acs and Laszlo Lov´asz. 1981. Khachiyans algorithm for linear programming. In Mathematical Programming at Oberwolfach, volume 14 of Mathematical Programming Studies. Springer Berlin Heidelberg. 1032 Michael R. Garey and David S. Johnson. 1990. Computers and Intractability; A Guide to the Theory of NP-Completeness. W. H. Freeman & Co. Laurent E. Ghaoui, Eric Feron, and Vendataramanan Balakrishnan. 1994. Linear Matrix Inequalities in System & Control Theory (Studies in Applied Mathematics), volume 15. SIAM. Miguel A. Goberna and Margarita M. L. Rodriguez. 2006. Analyzing linear systems containing strict inequalities via evenly convex hulls. European Journal of Operational Research, 169(3). Miguel A. Goberna, Valentin Jornet, and Margarita M.L. Rodriguez. 2003. On linear systems containing strict inequalities. Linear Algebra and its Applications, 360(0). Harvey J. Greenberg. 1993. How to analyze the results of linear programspart 3: Infeasibility diagnosis. Interfaces, 23(6). Ahmed Hassan and Dragomir Radev. 2010. Identifying text polarity using random walks. In ACL. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In ACL. Valentin Jijkoun, Maarten de Rijke, and Wouter Weerkamp. 2010. Generating focused topicspecific sentiment lexicons. In ACL. Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Building lexicon for sentiment analysis from massive collection of html documents. In EMNLP-CoNLL. J. Kamps, M. Marx, R. Mokken, and M. de Rijke. 2004. Using wordnet to measure semantic orientation of adjectives. In LREC. Richard M. Karp. 2010. Reducibility among combinatorial problems. In 50 Years of Integer Programming 1958-2008 - From the Early Years to the State-ofthe-Art. Springer Berlin Heidelberg. L. G. Khachiyan. 1980. Polynomial algorithms in linear programming. Zh. Vychisl. Mat. Mat. Fiz., 20(1). Adam Kilgarriff. 2004. How dominant is the commonest sense of a word? In Text, Speech, and Dialogue, volume 3206 of Lecture Notes in Artificial Intelligence. M. Kim and E. Hovy. 2004. Determining the sentiment of opinions. In COLING. 
Soo-Min Kim and Eduard Hovy. 2006. Identifying and analyzing judgment opinions. In HLT-NAACL. Beata Beigman Klebanov, Nitin Madnani, and Jill Burstein. 2013. Using pivot-based paraphrasing and sentiment profiles to improve a subjectivity lexicon for essay data. In ACL. Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Yue Lu, Malu Castellanos, Umeshwar Dayal, and ChengXiang Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: an optimization approach. In WWW. Andrew L. Maas, Raymond E. Daly, Peter Pham, Dan Huang, Andrew Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Saif Mohammad, Cody Dunne, and Bonnie Dorr. 2009. Generating high-coverage semantic orientation lexicons from overtly marked words and a thesaurus. In EMNLP. Wei Peng and Dae Hoon Park. 2011. Generate adjective sentiment dictionary for social media sentiment analysis using constrained nonnegative matrix factorization. In ICWSM. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In IJCAI. Mark Sanderson. 1999. The impact on retrieval effectiveness of skewed frequency distributions. ACM Transactions on Information Systems, 17(4). Thomas J. Schaefer. 1978. The complexity of satisfiability problems. In STOC. Alexander Schrijver. 1986. Theory of linear and integer programming. John Wiley & Sons, Inc., New York, NY, USA. P. Stone, D. Dunphy, M. Smith, and J. Ogilvie. 1966. The General Inquirer: A computer approach to content analysis. MIT Press. M. Taboada and J. Grieve. 2004. Analyzing appraisal automatically. In AAAI Spring Symposium. Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In ACL. M. Tamiz, S. J. Mardle, and D. F. Jones. 1996. Detecting IIS in infeasible linear programmes using techniques from goal programming. Comput. Oper. Res., 23(2). Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. Building large-scale twitter-specific sentiment lexicon : A representation learning approach. In COLING. P. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In ACL. Gbolahan K. Williams and Sarabjot Singh Anand. 2009. Predicting the polarity strength of adjectives using wordnet. In ICWSM. 1033 T. Wilson, J. Wiebe, and P. Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In HLT/EMNLP. Yunfang Wu and Miaomiao Wen. 2010. Disambiguating dynamic sentiment ambiguous adjectives. In COLING. 1034
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1–11, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing James Goodman∗Andreas Vlachos† Jason Naradowsky∗ ∗Computer Science Department, University College London [email protected], [email protected] † Department of Computer Science, University of Sheffield [email protected] Abstract Semantic parsers map natural language statements into meaning representations, and must abstract over syntactic phenomena, resolve anaphora, and identify word senses to eliminate ambiguous interpretations. Abstract meaning representation (AMR) is a recent example of one such semantic formalism which, similar to a dependency parse, utilizes a graph to represent relationships between concepts (Banarescu et al., 2013). As with dependency parsing, transition-based approaches are a common approach to this problem. However, when trained in the traditional manner these systems are susceptible to the accumulation of errors when they find undesirable states during greedy decoding. Imitation learning algorithms have been shown to help these systems recover from such errors. To effectively use these methods for AMR parsing we find it highly beneficial to introduce two novel extensions: noise reduction and targeted exploration. The former mitigates the noise in the feature representation, a result of the complexity of the task. The latter targets the exploration steps of imitation learning towards areas which are likely to provide the most information in the context of a large action-space. We achieve state-ofthe art results, and improve upon standard transition-based parsing by 4.7 F1 points. 1 Introduction Meaning representation languages and systems have been devised for specific domains, such as ATIS for air-travel bookings (Dahl et al., 1994) and database queries (Zelle and Mooney, 1996; bolster against defenses center will attacks The NATO cyber ’s prep dobj nsubj aux pobj nn det possessive poss bolster-01 defend-01 center military attack-01 name cyber NATO ARG1 ARG0 ARG1 prep-against name mod op1 Figure 1: Dependency (left) and AMR graph (right) for: “The center will bolster NATO’s defenses against cyber-attacks.’ Liang et al., 2013). Such machine-interpretable representations enable many applications relying on natural language understanding. The ambition of Abstract Meaning Representation (AMR) is that it is domain-independent and useful in a variety of applications (Banarescu et al., 2013). The first AMR parser by Flanigan et al. (2014) used graph-based inference to find a highestscoring maximum spanning connected acyclic graph. Later work by Wang et al. (2015b) was inspired by the similarity between the dependency parse of a sentence and its semantic AMR graph (Figure 1). Wang et al. (2015b) start from the dependency parse and learn a transition-based parser that converts it incrementally into an AMR graph using greedy decoding. An advantage of this approach is that the initial stage of dependency parsing is well-studied and trained using larger corpora than that for which AMR annotations exist. Greedy decoding, where the parser builds the parse while maintaining only the best hypothesis at each step, has a well-documented disadvantage: error propagation (McDonald and Nivre, 2007). 
When the parser encounters states during parsing that are unlike those found during training, it is more likely to make mistakes, leading to states which are increasingly more foreign and causing errors to accumulate. 1 One way to ameliorate this problem is to employ imitation learning algorithms for structured prediction. Algorithms such as SEARN (Daum´e III et al., 2009), DAGGER (Ross et al., 2011), and LOLS (Chang et al., 2015) address the problem of error propagation by iteratively adjusting the training data to increasingly expose the model to training instances it is likely to encounter during test. Such algorithms have been shown to improve performance in a variety of tasks including information extraction(Vlachos and Craven, 2011), dependency parsing (Goldberg and Nivre, 2013), and feature selection (He et al., 2013). In this work we build on the transition-based parsing approach of Wang et al. (2015b) and explore the applicability of different imitation algorithms to AMR parsing, which has a more complex output space than those considered previously. The complexity of AMR parsing affects transition-based methods that rely on features to represent structure, since these often cannot capture the information necessary to predict the correct transition according to the gold standard. In other words, the features defined are not sufficient to “explain” why different actions should preferred by the model. Such instances become noise during training, resulting in lower accuracy. To address this issue, we show that the α-bound Khardon and Wachman (2007), which drops consistently misclassified training instances, provides a simple and effective way of reducing noise and raising performance in perceptron-style classification training, and does so reliably across a range of parameter settings. This noise reduction is essential for imitation learning to gain traction in this task, and we gain 1.8 points of F1-Score using the DAGGER imitation learning algorithm. DAGGER relies on an externally specified expert (oracle) to define the correct action in each state; this defines a simple 0-1 loss function for each action. Other imitation learning algorithms (such as LOLS, SEARN) and the variant of DAGGER proposed by Vlachos and Clark (2014) (henceforth V-DAGGER) can leverage a task level loss function that does not decompose over the actions taken to construct the AMR graph. However these require extra computations to roll-out to an end-state AMR graph for each possible action not taken. The large action-space of our transition system makes these algorithms computationally infeasible, and roll-outs to an end-state for many of the possible actions will provide little additional information. Hence we modify the algorithms to target this exploration to actions where the classifier being trained is uncertain of the correct response, or disagrees with the expert. This provides a further gain of 2.7 F1 points. This paper extends imitation learning to structured prediction tasks more complex than previously attempted. In the process, we review and compare recently proposed algorithms and show how their components can be recombined and adjusted to construct a variant appropriate to the task in hand. Hence we invest some effort reviewing these algorithms and their common elements. Overall, we obtain a final F-Score of 0.70 on the newswire corpus of LDC2013E117 (Knight et al., 2014). This is identical to the score obtained by Wang et al. (2015a), the highest so far published. 
Our gain of 4.5 F1 points from imitation learning over standard transition-based parsing is orthogonal to that of Wang et al. (2015a) from additional trained analysers, including co-reference and semantic role labellers, incorporated in the feature set. We further test on five other corpora of AMR graphs, including weblog domains, and show a consistent improvement in all cases with the application of imitation learning using DAGGER and the targeted V-DAGGER we propose here. 2 Transition-based AMR parsing AMR parsing is an example of the wider family of structured prediction problems, in which we seek a mapping from an input x ∈X to a structured output y ∈Y. Here x is the dependency tree, and y the AMR graph; both are graphs and we notationally replace x with s1 and y with sT , with s1...T ∈S. si are the intermediate graph configurations (states) that the system transitions through. A transition-based parser starts with an input s1, and selects an action a1 ∈A, using a classifier. ai converts si into si+1, i.e. si+1 = ai(si). We term the set of states and actions ⟨s1, a1, . . . aT−1, sT ⟩ a trajectory of length T. The classifier ˆπ is trained to predict ai from si, with ˆπ(s) = arg maxa∈A wa· Φ(s), assuming a linear classifier and a feature function Φ(s). We require an expert, π∗, that can indicate what actions should be taken on each si to reach the target (gold) end state. In problems like POStagging these are directly inferable from gold, as the number of actions (T) equals the number of 2 Action Name Param. Pre-conditions Outcome of action NextEdge lr β non-empty Set label of edge (σ0, β0) to lr. Pop β0. NextNode lc β empty Set concept of node σ0 to lc. Pop σ0, and initialise β. Swap β non-empty Make β0 parent of σ0 (reverse edge) and its sub-graph. Pop β0 and insert β0 as σ1. ReplaceHead β non-empty Pop σ0 and delete it from the graph. Parents of σ0 become parents of β0. Other children of σ0 become children of β0. Insert β0 at the head of σ and re-initialise β. Reattach κ β non-empty Pop β0 and delete edge (σ0, β0). Attach β0 as a child of κ. If κ has already been popped from σ then re-insert it as σ1. DeleteNode β empty; leaf σ0 Pop σ0 and delete it from the graph. Insert lc Insert a new node δ with AMR concept lc as the parent of σ0, and insert δ into σ. InsertBelow Insert a new node δ with AMR concept lc as a child of σ0. Table 1: Action Space for the transition-based graph parsing algorithm Algorithm 1: Greedy transition-based parsing Data: policy π, start state s1 Result: terminal state sT 1 scurrent ←s1; 2 while scurrent not terminal do 3 anext ←π(scurrent) scurrent ←anext(scurrent) 4 sT ←scurrent tokens with a 1:1 correspondence between them. In dependency parsing and AMR parsing this is not straightforward and dedicated transition systems are devised. Given a labeled training dataset D, algorithm 1 is first used to generate a trajectory for each of the inputs (d ∈D) with π = π∗, the expert from which we wish to generalise. The data produced from all expert trajectories (i.e. ⟨si,d, ai,d⟩for all i ∈ 1 . . . T and all d ∈1 . . . D), are used to train the classifier ˆπ, the learned classifier, using standard supervised learning techniques. Algorithm 1 is reused to apply ˆπ to unseen data. Our transition system (defining A, S), and feature sets are based on Wang et al. (2015b), and are not the main focus of this paper. We introduce the key concepts here, with more details in the supplemental material. 
We initialise the state with the stack of the nodes in the dependency tree, root node at the bottom. This stack is termed σ. A second stack, β is initialised with all children of the top node in σ. The state at any time is described by σ, β, and the current graph (which starts as the dependency tree with one node per token). At any stage before termination some of the nodes will be labelled with words from the sentence, and others with AMR concepts. Each action manipulates the top nodes in each stack, σ0 and β0. We reach a terminal state when σ is empty. The objective function to maximise is the Smatch score (Cai and Knight, 2013), which calculates an F1-Score between the predicted and gold-target AMR graphs. Table 1 summarises the actions in A. NextNode and NextEdge form the core action set, labelling nodes and edges respectively without changing the graph structure. Swap, Reattach and ReplaceHead change graph structure, keeping it a tree. We permit a Reattach action to use parameter κ equal to any node within six edges from σ0, excluding any that would disconnect the graph or create a cycle. The Insert/InsertBelow actions insert a new node as a parent/child of σ0. These actions are not used in Wang et al. (2015b), but Insert is very similar to the Infer action of Wang et al. (2015a). We do not use the Reentrance action of Wang et al. (2015b), as we found it not to add any benefit. This means that the output AMR is always a tree. Our transition system has two characteristics which provide a particular challenge: given a sentence, the trajectory length T is theoretically unbounded; and |A| can be of the order 103 to 104. Commonly used transition-based systems have a fixed trajectory length T, which often arises naturally from the nature of the problem. In PoStagging each token requires a single action, and in syntactic parsing the total size of the graph is limited to the number of tokens in the input. The lack of a bound in T here is due to Insert actions that can grow the the graph, potentially ad infinitum, and actions like Reattach, which can move a sub-graph repeatedly back-and-forth. The action space size is due to the size of the AMR vocabulary, which for relations (edge-labels) is restricted to about 100 possible values, but for concepts (node-labels) is almost as broad as an En3 Algorithm 2: Generic Imitation Learning Data: data D, expert π∗, Loss function F(s) Result: learned classifier C, trained policy ˆπ 1 Initialise C0; for n = 1 to N do 2 Initialise En = φ; 3 πRollin = RollInPolicy(π∗, C0...n−1, n); 4 πRollout = RollOutPolicy(π∗, C0...n−1, n); 5 for d ∈D do 6 Predict trajectory ˆs1:T with πRollin; 7 for ˆst ∈ˆs1:T do 8 foreach aj t ∈Explore(ˆst, π∗, πRollin) do 9 Φj t = Φ(d, aj t, ˆs1:t); 10 Predict ˆs′ t+1:T with πRollout; 11 Lj t = F(ˆs′ T ); 12 foreach j do 13 ActionCostj t = Lj t −mink Lk t 14 Add (Φt, ActionCostt) to En; 15 ˆπn, Cn = Train(C1...n−1, E1 . . . En); glish dictionary. The large action space and unbounded T also make beam search difficult to apply since it relies on a fixed length T with commensurability of actions at the same index on different search trajectories. 3 Imitation Learning for Structured Prediction Imitation learning originated in robotics, training a robot to follow the actions of a human expert (Schaal, 1999; Silver et al., 2008). The robot moves from state to state via actions, generating a trajectory in the same manner as the transitionbased parser of Algorithm 1. 
In the imitation learning literature, the learning of a policy ˆπ from just the expert generated trajectories is termed “exact imitation”.As discussed, it is prone to error propagation, which arises because the implicit assumption of i.i.d. inputs (si) during training does not hold. The states in any trajectory are dependent on previous states, and on the policy used. A number of imitation learning algorithms have been proposed to mitigate error propagation, and share a common structure shown in Algorithm 2. Table 2 highlights some key differences between them. The general algorithm firstly applies a policy πRollIn (usually the expert, π∗, to start) to the data instances to generate a set of ‘RollIn’ trajectories in line 6 (we adopt the terminology of ‘RollIn’ and ‘RollOut’ trajectories from Chang et al. (2015)). Secondly a number of ‘what if’ scenarios are considered, in which a different action aj t is taken from a given st instead of the actual at in the RollIn trajectory (line 8). Each of these exploratory actions generates a RollOut trajectory (line 10) to a terminal state, for which a loss (L) is calculated using a loss function, F(sj T ), defined on the terminal states. For a number of different exploratory actions taken from a state st on a RollIn trajectory, the action cost (or relative loss) of each is calculated (line 13). Finally the generated ⟨st, aj t, ActionCostj t⟩data are used to train a classifier, using any cost-sensitive classification (CSC) method (line 15). New πRollIn and πRollOut are generated, and the process repeated over a number of iterations. In general the starting expert policy is progressively removed in each iteration, so that the training data moves closer and closer to the distribution encountered by just the trained classifier. This is required to reduce error propagation. For a general imitation learning algorithm we need to specify: • the policy to generate the RollIn trajectory (the RollInPolicy) • the policy to generate RollOut trajectories, including rules for interpolation of learned and expert policies (the RollOutPolicy) • which one-step deviations to explore with a RollOut (the Explore function) • how RollOut data are used in the classification learning algorithm to generate ˆπi. (within the Train function) Exact Imitation can be considered a single iteration of this algorithm, with πRollIn equal to the expert policy, and a 0-1 binary loss for F (0 loss for π∗(st), the expert action, and a loss of 1 for any other action); all one-step deviations from the expert trajectory are considered without explicit RollOut to a terminal state. In SEARN (Daum´e III et al., 2009), one of the first imitation learning algorithms in this framework, the πRollIn and πRollOut policies are identical within each iteration, and are a stochastic blend of the expert and all classifiers trained in previous iterations. The Explore function considers every possible one-step deviation from the RollIn trajectories, with a full RollOut to a terminal state. The 4 Algorithm ˆπ RollIn RollOut Explore Train Exact Imitation Deterministic Expert only None. 0/1 expert loss All 1-step E1 only SEARN Stochastic Mixture Mixture, step-level stochastic All 1-step En only LOLS Deterministic Learned only Mixture, trajectory-level stoch. All 1-step E1 . . . En SCB-LOLS Deterministic Learned only Mixture, trajectory-level stoch. Random E1 . . . En SMILE Stochastic Mixture None. 0/1 expert loss All 1-step E1 . . . En DAGGER Deterministic Mixture None. 0/1 expert loss All 1-step E1 . . . 
En V-DAGGER Deterministic Mixture Mixture, step-level stochastic All 1-step E1 . . . En AGGREVATE Deterministic Learned only Expert only Random E1 . . . En Table 2: Comparison of selected aspects of Imitation Learning algorithms. Train function uses only the training data from the most recent iteration (En) to train Cn. LOLS extends this work to provide a deterministic learned policy (Chang et al., 2015), with ˆπn = Cn. At each iteration ˆπn is trained on all previously gathered data E1...n; πRollIn uses the latest classifier ˆπn−1, and each RollOut uses the same policy for all actions in the trajectory; either π∗with probability β, or ˆπn−1 otherwise. Both LOLS and SEARN use an exhaustive search of alternative actions as an Explore function. Chang et al. (2015) consider Structured Contextual Bandits (SCB) as a partial information case, the SCB modification of LOLS permits only one cost function call per RollIn (received from the external environment), so exhaustive RollOut exploration at each step is not possible. SCB-LOLS Explore picks a single step t ∈{1 . . . T} at random at which to make a random single-step deviation. Another strand of work uses only the expert policy when calculating the action cost. Ross and Bagnell (2010) introduce SMILE, and later DAGGER (Ross et al., 2011). These do not RollOut as such, but as in exact imitation consider all one-step deviations from the RollIn trajectory and obtain a 0/1 action cost for each by asking the expert what it would do in that state. At the nth iteration the training trajectories are generated from an interpolation of π∗and ˆπn−1, with the latter progressively increasing in importance; π∗is used with probability (1-δ)n−1 for some decay rate δ. ˆπn is trained using all E1...n. Ross et al. (2011) discuss and reject calculating an action cost by completing a RollOut from each one-step deviation to a terminal state. Three reasons given are: 1. Lack of real-world applicability, for example in robotic control. 2. Lack of knowledge of the final loss function, if we just have the expert’s actions. 3. Time spent calculating RollOuts and calling the expert. Ross and Bagnell (2014) do incorporate RollOuts to calculate an action cost in their AGGREVATE algorithm. These RollOuts use the expert policy only, and allow a cost-sensitive classifier to be trained that can learn that some mistakes are more serious than others. As with DAGGER, the trained policy cannot become better than the expert. V-DAGGER is the variant proposed by Vlachos and Clark (2014) in a semantic parsing task. It is the same as DAGGER, but with RollOuts using the same policy as RollIn. For both V-DAGGER and SEARN, the stochasticity of the RollOut means that a number of independent samples are taken for each one-step deviation to reduce the variance of the action cost, and noise in the training data. This noise reduction comes at the expense of the time needed to compute additional RollOuts. 4 Adapting imitation learning to AMR Algorithms with full RollOuts have particular value in the absence of an optimal (or nearoptimal) expert able to pick the best action from any state. If we have a suitable loss function, then the benefit of RollOuts may become worth the computation expended on them. For AMR parsing we have both a loss function in Smatch, and the ability to generate arbitrary RollOuts. We therefore use a heuristic expert. This reduces the computational cost at the expense of not always predicting the best action. 
An expert needs an alignment between gold AMR nodes and tokens in the parse-tree or sentence to determine the actions to convert to one from the other. These alignments are not provided in the gold AMR, and our expert uses the AMR node to token alignments of JAMR (Flanigan et al., 2014). These alignments are not trained, but generated using regex and string matching rules. However, trajectories are in the range 50-200 actions for most training sentences, which combined with the size of |A| makes an exhaustive search of all one-step deviations expensive. Compare this to unlabeled shift5 reduce parsers with 4 actions, or POS tagging with |A| ∼30. 4.1 Targeted exploration To reduce this cost we note that exploring RollOuts for all possible alternative actions can be uninformative when the learned and expert policies agree on an action and none of the other actions score highly with the learned policy. Extending this insight we modify the Explore function in Algorithm 2 to only consider the expert action, plus all actions scored by the current learned policy that are within a threshold τ of the score for the best rated action. In the first iteration, when there is no current learned policy, we pick a number of actions (usually 10) at random for exploration. Both SCB-LOLS and AGGREVATE use partial exploration, but select the step t ∈1 . . . T, and the action at at random. Here we optimise computational resources by directing the search to areas for which the trained policy is least sure of the optimal action, or disagrees with the expert. Using imitation learning to address error propagation of transition-based parsing provides theoretical benefit from ensuring the distribution of st, at in the training data is consistent with the distribution on unseen test data. Using RollOuts that mix expert and learned policies additionally permits the learned policy to exceed the performance of a poor expert. Incorporating targeted exploration strategies in the Explore function makes this computationally feasible. 4.2 Noise Reduction Different samples for a RollOut trajectory using V-DAGGER or SEARN can give very different terminal states sT (the final AMR graph) from the same starting st and at due to the step-level stochasticity. The resultant high variance in the reward signal hinders effective learning. Daum´e III et al. (2009) have a similar problem, and note that an approximate cost function outperforms single Monte Carlo sampling, “likely due to the noise induced following a single sample”. To control noise we use the α-bound discussed by Khardon and Wachman (2007). This excludes a training example (i.e. an individual tuple si, ai) from future training once it has been misclassified α times in training. We find that this simple idea avoids the need for multiple RollOut samples. An attraction of LOLS is that it randomly selects either expert or learned policy for each RollOut, and then applies this consistently to the whole trajectory. Using LOLS should reduce noise without increasing the sample size. Unfortunately the unbounded T of our transition system leads to problems if we drop the expert from the RollIn or RollOut policy mix too quickly, with many trajectories never terminating. Ultimately ˆπ learns to stop doing this, but even with targeted exploration training time is prohibitive and our LOLS experiments failed to provide results. We find that VDAGGER with an α-bound works as a good compromise, keeping the expert involved in RollIn, and speeding up learning overall. 
Another approach we try is a form of focused costing (Vlachos and Craven, 2011). Instead of using the learned policy for β% of steps in the RollOut, we use it for the first b steps, and then revert to the expert. This has several potential advantages: the heuristic expert is faster than scoring all possible actions; it focuses the impact of the exploratory step on immediate actions/effects so that mistakes ˆπ makes on a distant part of the graph do not affect the action cost; it reduces noise for the same reason. We increase b in each iteration so that the expert is asymptotically removed from RollOuts, a function otherwise supported by the decay parameter, δ. 4.3 Transition System adaptations Applying imitation learning to a transition system with unbounded T can and does cause problems in early iterations, with RollIn or RollOut trajectories failing to complete while the learned policy, ˆπ, is still relatively poor. To ensure every trajectory completes we add action constraints to the system. These avoid the most pathological scenarios, such as disallowing a Reattach of a previously Reattached sub-graph. These constraints are only needed in the first few iterations until ˆπ learns, via the action costs, to avoid these scenarios. They are listed in the Supplemental Material. As a final failsafe we insert a hard-stop on any trajectory once T > 300. To address the size of |A|, we only consider a subset of AMR concepts when labelling a node. Wang et al. (2015b) use all concepts that occur in the training data in the same sentence as the lemma of the node, leading to hundreds or thousands of possible actions from some states. We use the smaller set of concepts that were assigned by the expert to the lemma of the current node any6 Exact Imitation Imitation Learning Experiment No α α=1 α-Gain No α α=1 IL Gain (α) IL Gain (No α) Total Gain AROW, C=10 65.5 66.8 1.3 65.5 67.4 0.6 0.0 1.9 AROW, C=100 66.4 66.6 0.2 66.4 67.7 1.1 0.0 1.3 AROW, C=1000 66.4 67.0 0.6 66.5 68.2 1.2 0.1 1.8 PA, C=100 66.7 66.5 -0.2 67.2 68.7 2.2 0.5 2.0 Perceptron 65.5 65.3 -0.2 66.6 68.6 3.3 1.1 3.1 Table 3: DAGGER with α-bound. All figures are F-Scores on the validation set. 5 iterations of classifier training take place after each DAgger iteration. A decay rate (δ) for π∗of 0.3 was used. where in the training data. We obtain these assignments from an initial application of the expert to the full training data. We add actions to use the actual word or lemma of the current node to increase generalisation, plus an action to append ‘-01’ to ‘verbify’ an unseen word. This is similar to the work of Werling et al. (2015) in word to AMR concept mapping, and is useful since 38% of the test AMR concepts do not exist in the training data (Flanigan et al., 2014). Full details of the heuristics of the expert policy, features used and pre-processing are in Supplemental Material. All code is available at https://github.com/hopshackle/ dagger-AMR. 4.4 Na¨ıve Smatch as Loss Function Smatch (Cai and Knight, 2013) uses heuristics to control the combinatorial explosion of possible mappings between the input and output graphs, but is still too computationally expensive to be calculated for every RollOut during training. We retain Smatch for reporting all final results, but use ‘Na¨ıve Smatch’ as an approximation during training. This skips the combinatorial mapping of nodes between predicted and target AMR graphs. Instead, for each graph we compile a list of: • Node labels, e.g. name • Node-Edge-Node label concatenations, e.g. 
leave-01:ARG0:room • Node-Edge label concatenations, e.g. leave-01:ARG0, ARG0:room The loss is the number of entries that appear in only one of the lists. We do not convert to an F1 score, as retaining the absolute number of mistakes is proportional to the size of the graph. The flexibility of the transition system means multiple different actions from a given state si can lead, via different RollOut trajectories, to the same target sT . This can result in many actions having the best action cost, reducing the signal in the training data and giving poor learning. To encourage short trajectories we break these ties with a penalty of T/5 to Na¨ıve Smatch. Multiple routes of the same length still exist, and are preferred equally. Note that the ordering of the stack of dependency tree nodes in the transition system means we start at leaf nodes and move up the tree. This prevents sub-components of the output AMR graph being produced in an arbitrary order. 5 Experiments The main dataset used is the newswire (proxy) section of LDC2014T12 (Knight et al., 2014). The data from years 1995-2006 form the training data, with 2007 as the validation set and 2008 as the test set. The data split is the same as that used by Flanigan et al. (2014) and Wang et al. (2015b). 1 We first assess the impact of noise reduction using the alpha bound, and report these experiments without Rollouts (i.e. using DAGGER) to isolate the effect of noise reduction. Table 3 summarises results using exact imitation and DAGGER with the α-bound set to discard a training instance after one misclassification. This is the most extreme setting, and the one that gave best results. We try AROW (Crammer et al., 2013), PassiveAggressive (PA) (Crammer et al., 2006), and perceptron (Collins, 2002) classifiers, with averaging in all cases. We see a benefit from the α-bound for exact imitation only with AROW, which is more noise-sensitive than PA or the simple perceptron. With DAGGER there is a benefit for all classifiers. In all cases the α-bound and DAGGER are synergistic; without the α-bound imitation learning works less well, if at all. α=1 was the optimal setting, with lesser benefit observed for larger values. We now turn our attention to targeted exploration and focused costing, for which we use VDAGGER as explained in section 4. For all V1Formally Flanigan et al. (2014; Wang et al. (2015b) use the pre-release version of this dataset (LDC2013E117). Werling et al. (2015) conducted comparative tests on the two versions, and found only a very minor changes of 0.1 to 0.2 points of F-score when using the final release. 7 Authors Algorithmic Approach R P F Flanigan et al. (2014) Concept identification with semi-markov model followed by optimisation of constrained graph that contains all of these. 0.52 0.66 0.58 Werling et al. (2015) As Flanigan et al. (2014), with enhanced concept identification 0.59 0.66 0.62 Wang et al. (2015b) Single stage using transition-based parsing algorithm 0.62 0.64 0.63 Pust et al. (2015) Single stage System-Based Machine Translation 0.66 Peng et al. (2015) Hyperedge replacement grammar 0.57 0.59 0.58 Artzi et al. (2015) Combinatory Categorial Grammar induction 0.66 0.67 0.66 Wang et al. (2015a) Extensions to action space and features in Wang et al. (2015b) 0.69 0.71 0.70 This work Imitation Learning with transition-based parsing 0.68 0.73 0.70 Table 4: Comparison of previous work on the AMR task. R, P and F are Recall, Precision and F-Score. 
DAGGER experiments we use AROW with regularisation parameter C=1000, and δ=0.3. Figure 2 shows results by iteration of reducing the number of RollOuts explored. Only the expert action, plus actions that score close to the bestscoring action (defined by the threshold) are used for RollOuts. Using the action cost information from RollOuts does surpass simple DAGGER, and unsurprisingly more exploration is better. Figure 3 shows the same data, but by total computational time spent2. This adjusts the picture, as small amounts of exploration give a faster benefit, albeit not always reaching the same peak performance. As a baseline, three iterations of VDAGGER without targeted exploration (threshold = ∞) takes 9600 minutes on the same hardware to give an F-Score of 0.652 on the validation set. Figure 4 shows the improvement using focused costing. The ‘n/m’ setting sets b, the number of initial actions taken by ˆπ in a RollOut to n, and then increases this by m at each iteration. We gain an increase of 2.9 points from 0.682 to 0.711. In all the settings tried, focused costing improves the results, and requires progressive removal of the expert to achieve the best score. We use the classifier from the Focused Costing 5/5 run to achieve an F-Score on the held-out test set of 0.70, equal to the best published result so far (Wang et al., 2015a). Our gain of 4.7 points from imitation learning over standard transition-based parsing is orthogonal to that of Wang et al. (2015a) using exact imitation with additional trained analysers; they experience a gain of 2 points from using a Charniak parser (Charniak and Johnson, 2005) trained on the full OntoNotes corpus instead of the Stanford parser used here and in Wang et al. (2015b), and a further gain of 2 points from a semantic role labeller. Table 4 lists previous AMR work on the same dataset. 2experiments were run on 8-core Google Cloud n1highmem-8 machines. Validation F-Score Test F-Score Dataset EI D V-D V-D Rao et al proxy 0.670 0.686 0.704 0.70 0.61 dfa 0.495 0.532 0.546 0.50 0.44 bolt 0.456 0.468 0.524 0.52 0.46 xinhua 0.598 0.623 0.683 0.62 0.52 lpp 0.540 0.546 0.564 0.55 0.52 Table 5: Comparison of Exact Imitation (EI), DAGGER (D), V-DAGGER (V-D) on all components of the LDC2014T12 corpus. Using DAGGER with this system we obtained an F-Score of 0.60 in the Semeval 2016 task on AMR parsing, one standard deviation above the mean of all entries. (Goodman et al., 2016) Finally we test on all components of the LDC2014T12 corpus as shown in Table 5, which include both newswire and weblog data, as well as the freely available AMRs for The Little Prince, (lpp)3. For each we use exact imitation, DAGGER, and V-DAGGER on the train/validation/splits specified in the corpus. In all cases, imitation learning without RollOuts (DAGGER) improves on exact imitation, and incorporating RollOuts (VDAGGER) provides an additional benefit. Rao et al. (2015) use SEARN on the same datasets, but with a very different transition system. We show their results for comparison. Our expert achieves a Smatch F-Score of 0.94 on the training data. This explains why DAGGER, which assumes a good expert, is effective. Introducing RollOuts provides additional theoretical benefits from a non-decomposable loss function that can take into account longer-term impacts of an action. This provides much more information than the 0/1 binary action cost in DAGGER, and we can use Na¨ıve Smatch as an approximation to our actual objective function during training. 
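To make this approximation concrete, the sketch below shows one way the Naïve Smatch loss of Section 4.4 could be computed. It is a minimal illustration rather than the released implementation: graphs are assumed to be given as a node-id-to-label mapping plus (source, relation, target) edge triples, counting mismatches with multiplicity is our reading of the paper, and the final term is the T/5 tie-breaking penalty.

from collections import Counter

def graph_signature(nodes, edges):
    # Entries used by Naive Smatch: node labels (e.g. 'name'),
    # node-edge-node concatenations (e.g. 'leave-01:ARG0:room') and
    # node-edge concatenations (e.g. 'leave-01:ARG0', 'ARG0:room').
    sig = Counter(nodes.values())
    for src, rel, tgt in edges:
        s, t = nodes[src], nodes[tgt]
        sig[s + ":" + rel + ":" + t] += 1
        sig[s + ":" + rel] += 1
        sig[rel + ":" + t] += 1
    return sig

def naive_smatch_loss(pred_nodes, pred_edges, gold_nodes, gold_edges, trajectory_length=0):
    # Loss = number of entries appearing in only one of the two lists, kept as
    # an absolute count (not F1) so that it scales with graph size, plus a
    # T/5 penalty that breaks ties in favour of short trajectories.
    pred = graph_signature(pred_nodes, pred_edges)
    gold = graph_signature(gold_nodes, gold_edges)
    mismatches = sum(((pred - gold) + (gold - pred)).values())
    return mismatches + trajectory_length / 5.0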
This informational benefit comes at the cost of increased noise and computational expense, which we control with targeted exploration and focused 3http://amr.isi.edu/download.html 8 Figure 2: Targeted exploration with VDAGGER by iteration. Figure 3: Targeted exploration with VDAGGER by time. Figure 4: Focused costing with VDAGGER. All runs use threshold of 0.10. costing. We gain 2.7 points in F-Score, at the cost of 80-100x more computation. In problems with a less good expert, the gain from exploration could be much greater. Similarly, if designing an expert for a task is time-consuming, then it may be a better investment to rely on exploration with a poor expert to achieve the same result. 6 Related Work Other strategies have been used to mitigate the error propagation problem in transition-based parsing. A common approach is to use beam search through state-space for each action choice to find a better approximation of the long-term score of the action, e.g. Zhang and Clark (2008). Goldberg and Elhadad (2010) remove the determinism of the sequence of actions to create easy-first parsers, which postpone uncertain, error-prone decisions until more information is available. This contrasts with working inflexibly left-to-right along a sentence, or bottom-to-top up a tree. Goldberg and Nivre (2012) introduce dynamic experts that are complete in that they will respond from any state, not just those on the perfect trajectory assuming no earlier mistakes; any expert used with an imitation learning algorithm needs to be complete in this sense. Their algorithm takes exploratory steps off the expert trajectory to augment the training data collected in a fashion very similar to DAGGER. Honnibal et al. (2013) use a non-monotonic parser that allows actions that are inconsistent with previous actions. When such an action is taken it amends the results of previous actions to ensure post-hoc consistency. Our parser is nonmonotonic, and we have the same problem encountered by Honnibal et al. (2013) with many different actions from a state si able to reach the target sT , following different “paths up the mountain”. This leads to poor learning. To resolve this with fixed T they break ties with a monotonic parser, so that actions that do not require later correction are scored higher in the training data. In our variable T environment, adding a penalty to the size of T is sufficient (section 4.4). Vlachos and Clark (2014) use V-DAGGER to give a benefit of 4.8 points of F-Score in a domain-specific semantic parsing problem similar to AMR. Their expert is sub-optimal, with no information on alignment between words in the input sentence, and nodes in the target graph. The parser learns to link words in the input to one of the 35 node types, with the ‘expert’ policy aligning completely at random. This is infeasible with AMR parsing due to the much larger vocabulary. 7 Conclusions Imitation learning provides a total benefit of 4.5 points with our AMR transition-based parser over exact imitation. This is a more complex task than many previous applications of imitation learning, and we found that noise reduction was an essential pre-requisite. Using a simple 0/1 binary action cost using a heuristic expert provided a benefit of 1.8, with the remaining 2.7 points coming from RollOuts with targeted exploration, focused costing and a non-decomposable loss function that was a better approximation to our objective. 
We have considered imitation learning algorithms as a toolbox that can be tailored to fit the characteristics of the task. An unbounded T meant that the LOLS RollIn was not ideal, but this could be modified to slow the loss of influence of the expert policy. We anticipate the approaches that we have found useful in the case of AMR to reduce the impact of noise, efficiently support large action spaces with targeted exploration, and cope with unbounded trajectories in the transition system will be of relevance to other structured prediction tasks. 9 Acknowledgments Andreas Vlachos is supported by the EPSRC grant Diligent (EP/M005429/1) and Jason Naradowsky by a Google Focused Research award. We would also like to thank our anonymous reviewers for many comments that helped improve this paper. References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal, September. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria, August. Association for Computational Linguistics. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In ACL (2), pages 748–752. Kai-wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daum´e III, and John Langford. 2015. Learning to search better than your teacher. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2058–2066. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Machine Learning Research, 7:551–585. Koby Crammer, Alex Kulesza, and Mark Dredze. 2013. Adaptive regularization of weight vectors. Mach Learn, 91:155–187. Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43–48. Association for Computational Linguistics. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning, 75(3):297–325. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1426–1436. Association for Computational Linguistics. Yoav Goldberg and Michael Elhadad. 2010. 
An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742–750. Association for Computational Linguistics. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In COLING, pages 959–976. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the association for Computational Linguistics, 1:403–414. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alphabound. In Proceedings of the 10th International Workshop on Semantic Evaluation. He He, Hal Daum´e III, and Jason Eisner. 2013. Dynamic feature selection for dependency parsing. In Empirical Methods in Natural Language Processing. Matthew Honnibal, Yoav Goldberg, and Mark Johnson. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 163–172. Citeseer. Roni Khardon and Gabriel Wachman. 2007. Noise tolerant variants of the perceptron algorithm. The journal of machine learning research, 8:227–248. Kevin Knight, Laura Baranescu, Claire Bonial, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, and Nathan Schneider. 2014. Abstract meaning representation (amr) annotation release 1.0. Linguistic Data Consortium Catalog. LDC2014T12. Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. 10 Ryan T McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In EMNLP-CoNLL, pages 122–131. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for amr parsing. CoNLL 2015, page 32. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Using syntaxbased machine translation to parse english into abstract meaning representation. arXiv preprint arXiv:1504.06665. Sudha Rao, Yogarshi Vyas, Hal Daume III, and Philip Resnik. 2015. Parser for abstract meaning representation using learning to search. arXiv preprint arXiv:1510.07586. St´ephane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In 13th International Conference on Artificial Intelligence and Statistics, pages 661–668. Stephane Ross and J Andrew Bagnell. 2014. Reinforcement and imitation learning via interactive noregret learning. arXiv preprint arXiv:1406.5979. St´ephane Ross, Geoffrey J Gordon, and J Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In 14th International Conference on Artificial Intelligence and Statistics, volume 15, pages 627–635. Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233–242. David Silver, James Bagnell, and Anthony Stentz. 2008. High performance outdoor navigation from overhead data using imitation learning. Robotics: Science and Systems IV, Zurich, Switzerland. Andreas Vlachos and Stephen Clark. 2014. A new corpus and imitation learning framework for contextdependent semantic parsing. Transactions of the Association for Computational Linguistics, 2:547–559. Andreas Vlachos and Mark Craven. 2011. 
Searchbased structured prediction applied to biomedical event extraction. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 49–57. Association for Computational Linguistics. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 857–862, Beijing, China, July. Association for Computational Linguistics. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for amr parsing. North American Association for Computational Linguistics, Denver, Colorado. Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust subgraph generation improves abstract meaning representation parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 982–991, Beijing, China, July. Association for Computational Linguistics. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the National Conference on Artificial Intelligence, pages 1050–1055. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 562–571. Association for Computational Linguistics. 11
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 97–107, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Graph-Based Translation Via Graph Segmentation Liangyou Li and Andy Way and Qun Liu ADAPT Centre, School of Computing Dublin City University, Ireland {liangyouli,away,qliu}@computing.dcu.ie Abstract One major drawback of phrase-based translation is that it segments an input sentence into continuous phrases. To support linguistically informed source discontinuity, in this paper we construct graphs which combine bigram and dependency relations and propose a graph-based translation model. The model segments an input graph into connected subgraphs, each of which may cover a discontinuous phrase. We use beam search to combine translations of each subgraph left-to-right to produce a complete translation. Experiments on Chinese–English and German– English tasks show that our system is significantly better than the phrase-based model by up to +1.5/+0.5 BLEU scores. By explicitly modeling the graph segmentation, our system obtains further improvement, especially on German–English. 1 Introduction Statistical machine translation (SMT) starts from sequence-based models. The well-known phrasebased (PB) translation model (Koehn et al., 2003) has significantly advanced the progress of SMT by extending translation units from single words to phrases. By using phrases, PB models can capture local phenomena, such as word order, word deletion, and word insertion. However, one of the significant weaknesses in conventional PB models is that only continuous phrases are used, so generalizations such as French ne . . . pas to English not cannot be learned. To solve this, syntax-based models (Galley et al., 2004; Chiang, 2005; Liu et al., 2006; Marcu et al., 2006) take tree structures into consideration to learn translation patterns by using non-terminals for generalization. Model C D S (Koehn et al., 2003) • sequence (Galley and Manning, 2010) • • sequence (Quirk et al., 2005) and • tree (Menezes and Quirk, 2005) This work • • graph Table 1: Comparison between our work and previous work in terms of three aspects: keeping continuous phrases (C), allowing discontinuous phrases (D), and input structures (S). However, the expressiveness of these models is confined by hierarchical constraints of the grammars used (Galley and Manning, 2010) since these patterns still cover continuous spans of an input sentence. By contrast, Quirk et al. (2005), Menezes and Quirk (2005) and Xiong et al. (2007) take treelets from dependency trees as the basic translation units. These treelets are connected and may cover discontinuous phrases. However, their models lack the ability to handle continuous phrases which are not connected in trees but could in fact be extremely important to system performance (Koehn et al., 2003). Galley and Manning (2010) directly extract discontinuous phrases from input sequences. However, without imposing additional restrictions on discontinuity, the amount of extracted rules can be very large and unreliable. Different from previous work (as shown in Table 1), in this paper we use graphs as input structures and propose a graph-based translation model to translate a graph into a target string. The basic translation unit in this model is a connected subgraph which may cover discontinuous phrases. 
The main contributions of this work are summarized as follows: • We propose to use a graph structure to combine a sequence and a tree (Section 3.1). The 97 graph contains both local relations between words from the sequence and long-distance relations from the tree. • We present a translation model to translate a graph (Section 3). The model segments the graph into subgraphs and uses beam search to generate a complete translation from left to right by combining translation options of each subgraph. • We present a set of sparse features to explicitly model the graph segmentation (Section 4). These features are based on edges in the input graph, each of which is either inside a subgraph or connects the subgraph with a previous subgraph. • Experiments (Section 5) on Chinese–English and German–English tasks show that our model is significantly better than the PB model. After incorporating the segmentation model, our system achieves still further improvement. 2 Review: Phrase-based Translation We first review the basic PB translation approach, which will be extended to our graph-based translation model. Given a pair of sentences ⟨S, T⟩, the conventional PB model is defined as Equation (1): p(tI 1 | sI 1) = IY i=1 p(ti|sai)d(sai, sai−1) (1) The target sentence T is broken into I phrases t1 · · · tI, each of which is a translation of a source phrase sai. d is a distance-based reordering model. Note that in the basic PB model, the phrase segmentation is not explicitly modeled which means that different segmentations are treated equally (Koehn, 2010). The performance of PB translation relies on the quality of phrase pairs in a translation table. Conventionally, a phrase pair ⟨s, t⟩has two properties: (i) s and t are continuous phrases. (ii) ⟨s, t⟩ is consistent with a word alignment A (Och and Ney, 2004): ∀(i, j) ∈A, si ∈s ⇔tj ∈t and ∃si ∈s, tj ∈t, (i, j) ∈A. PB decoders generate hypotheses (partial translations) from left to right. Each hypothesis maintains a coverage vector to indicate which source words have been translated so far. A hypothesis can be extended on the right by translating an 0 1 • • • 2 • • •• •• 3 ••• ••• ••• 4 •••• •••• •••• Figure 1: Beam search for phrase-based MT. • denotes a covered source position while indicates an uncovered position (Liu and Huang, 2014). uncovered source phrase. The translation process ends when all source words have been translated. Beam search (as in Figure 1) is taken as an approximate search strategy to reduce the size of the decoding space. Hypotheses which cover the same number of source words are grouped in a stack. Hypotheses can be pruned according to their partial translation cost and an estimated future cost. 3 Graph-Based Translation Our graph-based translation model extends PB translation by translating an input graph rather than a sequence to a target string. The graph is segmented into a sequence of connected subgraphs, each of which corresponds to a target phrase, as in Equation (2): (2) p(tI 1 | G(˜sI 1)) = IY i=1 p(ti|G(˜sai))d(G(˜sai), G(˜sai−1)) ≈ IY i=1 p(ti|G(˜sai))d(˜sai, ˜sai−1) where G(˜si) denotes a connected source subgraph which covers a (discontinuous) phrase ˜si. 3.1 Building Graphs As a more powerful and natural structure for sentence modeling, a graph can model various kinds of word-relations together in a unified representation. In this paper, we use graphs to combine two commonly used relations: bigram relations and dependency relations. Figure 2 shows an example of a graph. 
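To make the construction concrete, the following sketch builds such a graph from a tokenised sentence and its projective dependency parse, and includes the connectivity test that decides whether a (possibly discontinuous) phrase can serve as a translation unit. It is a minimal illustration only: networkx is an arbitrary convenience here, not part of the authors' system, and treating subgraph connectivity as ignoring edge direction is our assumption.

import networkx as nx

def build_word_graph(tokens, dependencies):
    # Combine local bigram relations and long-distance dependency relations
    # in one directed, node-labelled graph (parallel edges are kept when a
    # bigram relation and a dependency relation coincide).
    g = nx.MultiDiGraph()
    for i, word in enumerate(tokens):
        g.add_node(i, word=word)
    for head, dep in dependencies:          # (head index, dependent index) pairs
        g.add_edge(head, dep, rel="dep")
    for i in range(len(tokens) - 1):        # adjacent-word (bigram) edges
        g.add_edge(i, i + 1, rel="bigram")
    return g

def is_valid_translation_unit(g, positions):
    # A continuous n-gram and a treelet both qualify, as long as the covered
    # positions induce a connected subgraph of the input graph.
    sub = g.subgraph(positions)
    return sub.number_of_nodes() > 0 and nx.is_weakly_connected(sub)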
Each edge in the graph denotes either a dependency relation or a bigram relation. Note that the graph we use in this paper is directed, connected, node-labeled and may contain cycles. Bigram relations are implied in sequences and provide local and sequential information on pairs 98 2010 2010Nian FIFA FIFA World Cup Shijiebei in Zai South Africa Nanfei successfully Chenggong held Juxing Figure 2: An example graph for a Chinese sentence. Each node includes a Chinese word and its English meaning. Dashed red lines are bigram relations. Solid lines are dependency relations. Dotted blue lines are shared by bigram and dependency relations. of continuous words. Phrases connected by bigram relations (i.e. continuous phrases) are known to be useful to improve phrase coverage (Hanneman and Lavie, 2009). By contrast, dependency relations come from dependency structures which model syntactic and semantic relations between words. Phrases whose words are connected by dependency relations (also known as treelets) are linguistic-motivated and thus more reliable (Quirk et al., 2005). By combining these two relations together in graphs, we can make use of both continuous and linguistic-informed discontinuous phrases as long as they are connected subgraphs. 3.2 Training Different from PB translation, the basic translation units in our model are subgraphs. Thus, during training, we extract subgraph–phrase pairs instead of phrase pairs on parallel graph–string sentences associated with word alignments.1 An example of a translation rule is as follows: FIFA Shijiebei Juxing FIFA World Cup was held Note that the source side of a rule in our model is a graph which can be used to cover either a continuous phrase or a discontinuous phrase according to its match in an input graph during decoding. The algorithm for extracting translation rules is shown in Algorithm 1. This algorithm traverses each phrase pair ⟨˜s, t⟩, which is within a length limit and consistent with a given word alignment 1Different from translation rules in conventional syntaxbased MT, rules in our model are not learned based on synchronous grammars and so non-terminals are disallowed. Algorithm 1: Algorithm for extracting translation rules from a graph-string pair. Data: A word-aligned graph–string pair (G(S), T, A) Result: A set of translation pairs R 1 for each phrase t in T: | t |≤L do 2 find the minimal (may be discontinuous) phrase ˜s in S so that | ˜s |≤L and ⟨˜s, t⟩is consistent with A ; 3 Queue Q = {˜s}; 4 while Q is not empty do 5 pop an element ˜s off; 6 if G(˜s) is connected then 7 add ⟨G(˜s), t⟩to R; 8 end 9 if | ˜s |< L then 10 for each unaligned word si adjacent to ˜s do 11 ˜s′ = extend ˜s with si; 12 add ˜s′ to Q; 13 end 14 end 15 end 16 end (lines 1–2), and outputs ⟨G(˜s), t⟩if ˜s is covered by a connected subgraph G(˜s) (lines 6–8). A source phrase can be extended with unaligned source words which are adjacent to the phrase (lines 9– 14). We use a queue Q to store all phrases which are consistently aligned to the same target phrase (line 3). 3.3 Model and Decoding We define our model in the log-linear framework (Och and Ney, 2002) over a derivation D = r1r2 · · · rN, as in Equation (3): p(D) ∝ Y i φi(D)λi (3) where ri are translation rules, φi are features defined on derivations and λi are feature weights. 
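In practice the decoder never needs this product explicitly; derivations are compared by a weighted sum of log feature values. The short sketch below restates Equation (3) in that form, with hypothetical weights and feature values only (the nine features actually used are listed next).

import math

def derivation_score(phi, lam):
    # log p(D) is, up to an additive constant, sum_i lambda_i * log(phi_i(D)).
    return sum(lam[name] * math.log(value) for name, value in phi.items())

# Hypothetical weights and feature values for a single derivation D.
lam = {"p(G(s)|t)": 0.2, "p(t|G(s))": 0.3, "lm(t)": 0.5}
phi = {"p(G(s)|t)": 0.03, "p(t|G(s))": 0.05, "lm(t)": 1e-9}
score = derivation_score(phi, lam)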
In our experiments, we use the standard 9 features: two translation probabilities p(G(s)|t) and p(t|G(s)), two lexical translation probabilities plex(s|t) and plex(t|s), a language model lm(t) over a translation t, a rule penalty, a word penalty, an unknown word penalty and a distortion feature d for distance-based reordering. The calculation of the distortion feature d in our 99 S s2 s1 s2 s3 1 2 3 4 5 6 7 T d1=2 d2=5 d2+=3 d3=0 Figure 3: Distortion calculation for both continuous and discontinuous phrases in a derivation. . model is different from the one used in conventional PB models, as we need to take discontinuity into consideration. In this paper, we use a distortion function defined in Galley and Manning (2010) to penalize discontinuous phrases that have relatively long gaps. Figure 3 shows an example of calculating distortion for discontinuous phrases. Our graph-based decoder is very similar to the PB decoder except that, in our decoder, each hypothesis is extended by translating an uncovered subgraph instead of a phrase. Positions covered by the subgraph are then marked as translated. 4 Graph Segmentation Model Each derivation in our graph-based translation model implies a sequence of subgraphs (also called a segmentation). By default, similar to PB translation, our model treats each segmentation equally as shown in Equation (2). However, previous work on PB translation has suggested that such segmentations provide useful information which can improve translation performance. For example, boundary information in a phrase segmentation can be used for reordering models (Xiong et al., 2006; Cherry, 2013). In this paper, we are interested in directly modeling the segmentation using information from graphs. By making the assumption that each subgraph is only dependent on previous subgraphs, we define a generative process over a graph segmentation as in Equation (4): (4) p(G(˜s1) · · · G(˜sI)) = IY i=1 P(G(˜si)|G(˜s1) · · · G(˜si−1)) Instead of training a stand-alone discriminative segmentation model to assign each subgraph a probability given previous subgraphs, we implement the model via sparse features, each of which is extracted at run-time during decoding and then ZH–EN #Sents Train 1.5M+ MT02 (Dev) 878 MT04 1,597 MT05 1,082 DE–EN #Sents Train 2M+ WMT11 (Dev) 3,003 WMT12 3,003 WMT13 3,000 Table 2: The number of sentences in our corpora. directly added to the log-linear framework, so that these features can be tuned jointly with other features (of Section 3.3) to directly maximize the translation quality. Since a segmentation is obtained by breaking up the connectivity of an input graph, it is intuitive to use edges to model the segmentation. According to Equation (4), for a current subgraph Gi, we only consider those edges which are either inside Gi or connect Gi with a previous subgraph. Based on these edges, we extract sparse features for each node in the subgraph. The set of sparse features is defined as follows: n.w n.c  × n′.w n′.c  ×    C P H   ×  in out  where n.w and n.c are the word and class of the current node n, and n′.w and n′.c are the word and class of a node n′ connected to n. C, P, and H denote that the node n′ is in the current subgraph Gi or the adjacent previous subgraph Gi−1 or other previous subgraphs, respectively. Note that we treat the adjacent previous subgraph differently from others since information from the last previous unit is quite useful (Xiong et al., 2006; Cherry, 2013). 
in and out denote that the edge is an incoming edge or outgoing edge for the current node n. Figure 4 shows an example of extracting sparse features for a subgraph. Inspired by success in using sparse features in SMT (Cherry, 2013), in this paper we lexicalize only on the top-100 most frequent words. In addition, we group source words into 50 classes by using mkcls which should provide useful generalization (Cherry, 2013) for our model. 5 Experiment We conduct experiments on Chinese–English (ZH–EN) and German–English (DE–EN) translation tasks. Table 2 provides a summary of our corpra. Our ZH–EN training corpus contains 1.5M+ sentences from LDC. NIST 2002 (MT02) is taken as a development set to tune weights, and NIST 100 2010 2010Nian FIFA FIFA World Cup Shijiebei in Zai South Africa Nanfei successfully Chenggong held Juxing 2010 FIFA World Cup was held successfully in South Africa r1 r2 r3 Sparse features for r3: W:Zai W:Nanfei C in W:Zai W:Nanfei C out W:Zai W:Shijiebei P out W:Zai W:Juxing P in W:Nanfei W:Zai C in W:Nanfei W:Zai C out W:Nanfei W:Chenggong C in W:Chenggong W:Nanfei C out W:Chenggong W:Juxing P in C:4 W:Nanfei C in C:4 W:Nanfei C out C:4 W:Shijiebei P out C:4 W:Juxing P in C:5 W:Zai C in C:5 W:Zai C out C:5 W:Chenggong C in C:6 W:Nanfei C out C:6 W:Juxing P in W:Zai C:5 C in W:Zai C:5 C out W:Zai C:3 P out W:Zai C:7 P in W:Nanfei C:4 C in W:Nanfei C:4 C out W:Nanfei C:6 C in W:Chenggong C:5 C out W:Chenggong C:7 P in C:4 C:5 C in C:4 C:5 C out C:4 C:3 P out C:4 C:7 P in C:5 C:4 C in C:5 C:4 C out C:5 C:6 C in C:6 C:5 C out C:6 C:7 P in Figure 4: An illustration of extracting sparse features for each node in a subgraph during decoding. The decoder segments the graph in Figure 2 into three subgraphs (solid rectangles) and produces a complete translation by combining translations of each subgraph (dashed rectangles). In this figure, the class of a word is randomly assigned. 2004 (MT04) and NIST 2005 (MT05) are two test sets used to evaluate the systems. The Stanford Chinese word segmenter (Chang et al., 2008) is used to segment Chinese sentences. The Stanford dependency parser (Chang et al., 2009) parses a Chinese sentence into a projective dependency tree which is then converted to a graph by adding bigram relations. The DE–EN training corpus is from WMT 2014, including Europarl V7 and News Commentary. News-Test 2011 (WMT11) is taken as a development set while News-Test 2012 (WMT12) and News-Test 2013 (WMT13) are test sets. We use mate-tools2 to perform morphological analysis and parse German sentences (Bohnet, 2010). Then, MaltParser3 converts a parse result into a projective dependency tree (Nivre and Nilsson, 2005). 5.1 Settings In this paper, we mainly report results from five systems under the same configuration. PBMT is built by the PB model in Moses (Koehn et al., 2http://code.google.com/p/mate-tools/ 3http://www.maltparser.org/ 2007). Treelet extends PBMT by taking treelets as the basic translation units (Quirk et al., 2005; Menezes and Quirk, 2005). We implement a Treelet model in Moses which produces translations from left to right and uses beam search for decoding. DTU extends the PB model by allowing discontinuous phrases (Galley and Manning, 2010). We implement DTU with source discontinuity in Moses.4 GBMT is our basic graph-based translation system while GSM adds the graph segmentation model into GBMT. Both systems are implemented in Moses. Word alignment is performed by GIZA++ (Och and Ney, 2003) with the heuristic function growdiag-final-and. 
We use SRILM (Stolcke, 2002) to train a 5-gram language model on the Xinhua portion of the English Gigaword corpus 5th edition with modified Kneser-Ney discounting (Chen and Goodman, 1996). Batch MIRA (Cherry and Foster, 2012) is used to tune weights. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011), and TER (Snover et al., 2006) are used for evaluation. 4The re-implementation of DTU in Moses makes it easier to meaningfully compare systems under the same settings. 101 Metric System ZH–EN DE–EN MT04 MT05 WMT12 WMT13 BLEU ↑ PBMT 33.2 31.8 19.5 21.9 Treelet 33.8∗ 31.7 19.6 22.1∗ DTU 34.5∗ 32.3∗ 19.8∗ 22.3∗ GBMT 34.7∗ 32.4∗ 19.8∗ 22.4∗ GSM 34.9∗+ 32.7∗+ 20.3∗+ 22.9∗+ METEOR ↑ PBMT 32.1 32.3 28.0 29.2 Treelet 31.9 31.8 28.0 29.1 DTU 32.3∗ 32.4 28.2∗ 29.5∗ GBMT 32.4∗+ 32.5∗ 28.2∗ 29.4∗ GSM 32.7∗+ 32.6∗+ 28.5∗+ 29.8∗+ TER ↓ PBMT 60.6 61.6 63.7 60.2 Treelet 60.1∗ 61.4 63.2∗ 59.6∗ DTU 60.0∗ 61.5 63.5∗ 59.8∗ GBMT 59.8∗+ 61.3∗ 63.5∗ 59.8∗ GSM 60.5 62.1 63.1∗+ 59.3∗+ Table 3: Metric scores for all systems on Chinese–English (ZH–EN) and German–English (DE–EN). Each score is an average over three MIRA runs (Clark et al., 2011). ∗means a system is significantly better than PBMT at p ≤0.01. Bold figures mean a system is significantly better than Treelet at p ≤0.01. + means a system is significantly better than DTU at p ≤0.01. In this table, we mark a system by comparing it with previous ones. 5.2 Results and Discussion Table 3 shows our evaluation results. We find that our GBMT system is significantly better than PBMT as measured by all three metrics across all test sets. Specifically, the improvements are up to +1.5/+0.5 BLEU, +0.3/+0.2 METEOR, and -0.8/0.4 TER on ZH–EN and DE–EN, respectively. This improvement is reasonable as our system allows discontinuous phrases which can reduce data sparsity and handle long-distance relations (Galley and Manning, 2010). Another argument for discontinuous phrases is that they allow the decoder to use larger translation units which tend to produce better translations (Galley and Manning, 2010). However, this argument was only verified on ZH–EN. Therefore, we are interested in seeing whether we have the same observation in our experiments on both language pairs. We count the used translation rules in MT02 and WMT11 based on different target lengths. The results are shown in Figure 5. We find that both DTU and GBMT indeed tend to use larger translation units on ZH–EN. However, more smaller translation units are used on DE–EN.5 We presume this is because long-distance reordering is performed more often on ZH–EN than on DE–EN. Based on the fact that the distortion function d measures the reordering distance, we find that the average distortion value in PB on ZH–EN MT02 is 18.4 and 5We have the same finding on all test sets. System # Rules ZH–EN DE–EN DTU 224M+ 352M+ GBMT 99M+ 153M+ Table 4: The number of rules in DTU and GBMT. 3.5 on DE–EN WMT11. Our observations suggest that the argument that discontinuous phrases allow decoders to use larger translation units should be considered with caution when we explain the benefit of discontinuity on different language pairs. Compared to PBMT, the Treelet system does not show consistent improvements. Our system achieves significantly better BLEU and METEOR scores than Treelet on both ZH–EN and DE–EN, and a better TER score on DE–EN. This suggests that continuous phrases are essential for system robustness since it helps to improve phrase coverage (Hanneman and Lavie, 2009). 
Lower phrase coverage in Treelet results in more short phrases being used, as shown in Figure 5. In addition, we find that both DTU and our systems do not achieve consistent improvements over Treelet in terms of TER. We observed that both DTU and our systems tend to produce longer translations than Treelet, which might cause unreliable TER evaluation in our experiments as TER favours shorter sentences (He and Way, 2010). Since discontinuous phrases produced by using syntactic information are fewer in number but 102 1 2 3 4 5 6 7 5 10 15 MT02 1 2 3 4 5 6 7 5 10 15 WMT11 [log2] Phrase Count Number of English Words per Phrase PBMT Treelet DTU GBMT Figure 5: Phrase Length Histogram for MT02 and WMT11. more reliable (Koehn et al., 2003), our GBMT system achieves comparable performance with DTU but uses significantly fewer rules, as shown in Table 4. After integrating the graph segmentation model to help subgraph selection, GBMT is further improved and the resulted system G2S has significantly better evaluation scores than DTU on both language pairs. However, our segmentation model is more helpful on DE–EN than ZH– EN. We find that the number of features learned on ZH–EN (25K+) is much less than on DE–EN (49K+). This may result in a lower feature coverage during decoding. The lower number of features in ZH–EN could be caused by the fact that the development set MT02 has many fewer sentences than WMT11. Accordingly, we suggest to use a larger development set during tuning to achieve better translation performance when the segmentation model is integrated. Our current model is more akin to addressing problems in phrase-based and treelet-based models by segmenting graphs into pieces rather than extracting a recursive grammar. Therefore, similar to those models, our model is weak at phrase reordering as well. However, we are interesting in the potential power of our model by incorporating lexical reordering (LR) models and comparing it with syntax-based models. Table 5 shows BLEU scores of the hierarchical phrase-based (HPB) system (Chiang, 2005) in Moses6 and GBMT combined with a word-based 6For a fairer comparison, we disallow target discontinuity in HPB rules. This means that a non-terminal on the target side is either the first symbol or the last symbol. System ZH–EN DE–EN MT04 MT05 WMT12 WMT13 GBMT+LR 36.0 33.9 20.6 23.6 HPB 36.1 34.1 20.3 22.8 Table 5: BLEU scores of a Moses hierarchical phrase-based system (HPB) and our system (GBMT) with a word-based lexical reordering model (LR). LR model (Koehn et al., 2005). We find that the LR model significantly improves our system. GBMT+LR is comparable with the Moses HPB model on Chinese–English and better than HPB on German–English. 5.3 Examples Figure 6 shows three examples from MT04 to better explain the differences of each system. Example 1 shows that systems which allow discontinuous phrases (namely Treelet, DTU, GBMT, and GSM) successfully translate a Chinese collocation “Yu ...Wuguan” to “have nothing to do with” while PBMT fails to catch the generalization since it only allows continuous phrases. In Example 2, Treelet translates a discontinuous phrase “Dui ...Zuofa” (to ...practice) only as “to” where an important target word “practice” is dropped. By contrast, bigram relations allow our systems (GBMT and GSM) to find a better phrase to translate: “De Zuofa” to “of practice”. In addition, DTU translates a discontinuous phrase “De Zuofa ...Buman” to “dissatisfaction with the approach of”. 
However, the phrase is actually not 103 Example 1 PBMT: the united states has indicated that the united states and north korea delegation has visited Treelet: the united states has indicated that it has nothing to do with the us delegation visited the north korea DTU: the united states has indicated that it has nothing to do with the us delegation visited north korea GBMT: the united states has indicated that it has nothing to do with the us delegation visited north korea GSM: the united states has indicated that it has nothing to do with the us delegation visited north korea REF: the american government said that it has nothing to do with the american delegation to visit north korea american government said with visit north korea of american delegation no tie Meiguo Zhengfu Biaoshi Yu Zoufang BeiHan De Meiguo Daibiaotuan Wuguan the united states has indicated that it has nothing to do with the us delegation visited north korea Example 2 PBMT: the united states government to brazil has repeatedly expressed its dissatisfaction . Treelet: the government of brazil to the united states has on many occasions expressed their discontent . DTU: the united states has repeatedly expressed its dissatisfaction with the approach of the government to brazil . GBMT: the us government has repeatedly expressed dissatisfaction with the practice of brazil . GSM: the us government has repeatedly expressed dissatisfaction with the practice of brazil . REF: the us government has expressed their resentment against this practice of brazil on many occasions . US government to Brazil of practice already many times express dissatisfaction . Meiguo Zhengfu Dui Baxi De Zuofa Yijing Duo Ci Biaoshi Buman . the us government has repeatedly expressed dissatisfaction with the practice of brazil . Example 3 PBMT: the government and all sectors of society should continue to explore in depth and draw on collective wisdom . Treelet: the government must continue to make in-depth discussions with various sectors of the community and the collective wisdom . DTU: the government must continue to work together with various sectors of the community to make an in-depth study and draw on collective wisdom . GBMT: the government must continue to work together with various sectors of the community in-depth study and draw on collective wisdom . GSM: the government must continue to make in-depth discussions with various sectors of the community and draw on collective wisdom . REF: the government must continue to hold thorough discussions with all walks of life to pool the wisdom of the masses . government must continue with society each community make in-depth discussion , draw collective wisdom . Zhengfu Wubi Jixu Yu Shehui Ge Jie Zuo Shengru Taolun , Jisi Guangyi . the government must continue to make in-depth discussions with various sectors of the community and draw on collective wisdom . Figure 6: Translation examples from MT04 produced by different systems. Each source sentence is annotated by dependency relations and additional bigram relations (dotted red edges). We also annotate phrase alignments produced by our system GSM. 104 linguistically motivated and could be unreliable. By disallowing phrases which are not connected in the input graph, GBMT and GSM produce better translations. Example 3 illustrates that our graph segmentation model helps to select better subgraphs. After obtaining a partial translation “the government must”, GSM chooses to translate a subgraph which covers a discontinuous phrase “Jixu . . . 
Zuo” to “continue to make” while GBMT translates “Jixu Yu” (continue . . . with) to “continue to work together with”. By selecting the proper subgraph to translate, GSM performs a better reordering on the translation. 6 Related Work Starting from sequence-based models, SMT has been benefiting increasingly from complex structures. Sequence-based MT: Since the breakthrough made by IBM on word-based models in the 1990s (Brown et al., 1993), SMT has developed rapidly. The PB model (Koehn et al., 2003) advanced the state-of-the-art by translating multi-word units, which makes it better able to capture local phenomena. However, a major drawback in PBMT is that only continuous phrases are considered. Galley and Manning (2010) extend PBMT by allowing discontinuity. However, without linguistic structure information such as syntax trees, sequence-based models can learn a large amount of phrases which may be unreliable. Tree-based MT: Compared to sequences, trees provide recursive structures over sentences and can handle long-distance relations. Typically, trees used in SMT are either phrasal structures (Galley et al., 2004; Liu et al., 2006; Marcu et al., 2006) or dependency structures (Menezes and Quirk, 2005; Xiong et al., 2007; Xie et al., 2011; Li et al., 2014). However, conventional treebased models only use linguistically well-formed phrases. Although they are more reliable in theory, discarding all phrase pairs which are not linguistically motivated is an overly harsh decision. Therefore, exploring more translation rules usually can significantly improve translation performance (Marcu et al., 2006; DeNeefe et al., 2007; Wang et al., 2007; Mi et al., 2008). Graph-based MT: Compared to sequences and trees, graphs are more general and can represent more relations between words. In recent years, graphs have been drawing quite a lot of attention from researchers. Jones et al. (2012) propose a hypergraph-based translation model where hypergraphs are taken as a meaning representation of sentences. However, large corpora with annotated hypergraphs are not readily available for MT. Li et al. (2015) use an edge replacement grammar to translate dependency graphs which are converted from dependency trees by labeling edges. However, their model only focuses on subgraphs which cover continuous phrases. 7 Conclusion In this paper, we extend the conventional phrasebased translation model by allowing discontinuous phrases. We use graphs which combine bigram and dependency relations together as inputs and present a graph-based translation model. Experiments on Chinese–English and German–English show our model to be significantly better than the phrase-based model as well as other more sophisticated models. In addition, we present a graph segmentation model to explicitly guide the selection of subgraphs. In experiments, this model further improves our system. In the future, we will extend this model to allow discontinuity on target sides and explore the possibility of directly encoding reordering information in translation rules. We are also interested in using graphs for neural machine translation to see how it can translate and benefit from graphs. Acknowledgments This research has received funding from the People Programme (Marie Curie Actions) of the European Union’s Framework Programme (FP7/20072013) under REA grant agreement no 317471 and the European Union’s Horizon 2020 research and innovation programme under grant agreement no 645452 (QT21). 
The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. The authors thank all anonymous reviewers for their insightful comments and suggestions. References Bernd Bohnet. 2010. Very High Accuracy and Fast Dependency Parsing is Not a Contradiction. In Proceedings of the 23rd International Conference on 105 Computational Linguistics, pages 89–97, Beijing, China, August. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263– 311. Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese Word Segmentation for Machine Translation Performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224–232, Columbus, Ohio, June. Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative Reordering with Chinese Grammatical Relations Features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation, pages 51–59, Boulder, Colorado, June. Stanley F. Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Language Modeling. In Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL ’96, pages 310–318, Santa Cruz, California, June. Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427–436, Montreal, Canada, June. Colin Cherry. 2013. Improved Reordering for PhraseBased Translation using Sparse Features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 22–31, Atlanta, Georgia, June. David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263–270, Ann Arbor, Michigan, June. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, pages 176–181, Portland, Oregon, June. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What Can Syntax-Based MT Learn from Phrase-Based MT? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 755– 763, Prague, Czech Republic, June. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85–91, Edinburgh, Scotland, July. Michel Galley and Christopher D. Manning. 2010. Accurate Non-hierarchical Phrase-Based Translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 966–974, Los Angeles, California, June. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. 
What’s in a Translation Rule? In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLTNAACL 2004, page 273280, Boston, Massachusetts, USA, May. Greg Hanneman and Alon Lavie. 2009. Decoding with Syntactic and Non-syntactic Phrases in a Syntaxbased Machine Translation System. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation, pages 1–9, Boulder, Colorado, June. Yifan He and Andy Way. 2010. Metric and reference factors in minimum error rate training. Machine Translation, 24(1):27–38. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, pages 1359–1376, Mumbai, India, December. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 48–54, Edmonton, Canada, July. Philipp Koehn, Amittai Axelrod, Alexandra Birch, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of the International Workshop on Spoken Language Translation 2005, pages 68–75, Pittsburgh, PA, USA, October. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration 106 Sessions, pages 177–180, Prague, Czech Republic, June. Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition. Liangyou Li, Jun Xie, Andy Way, and Qun Liu. 2014. Transformation and Decomposition for Efficiently Implementing and Improving Dependency-to-String Model In Moses. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, October. Liangyou Li, Andy Way, and Qun Liu. 2015. Dependency Graph-to-String Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, September. Lemao Liu and Liang Huang. 2014. Search-Aware Tuning for Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1942– 1952, Doha, Qatar, October. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string Alignment Template for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616, Sydney, Australia, July. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical Machine Translation with Syntactified Target Language Phrases. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 44–52, Sydney, Australia. Arul Menezes and Chris Quirk. 2005. Dependency Treelet Translation: The Convergence of Statistical and Example-Based Machine-translation? 
In Proceedings of the Workshop on Example-based Machine Translation at MT Summit X, September. Haitao Mi, Liang Huang, and Qun Liu. 2008. ForestBased Translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 192–199, Columbus, Ohio, USA, June. Joakim Nivre and Jens Nilsson. 2005. PseudoProjective Dependency Parsing. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 99–106, Ann Arbor, Michigan, June. Franz Josef Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 295–302, Philadelphia, PA, USA, July. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51, March. Franz Josef Och and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417– 449, December. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, July. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically Informed Phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 271–279, Ann Arbor, Michigan, June. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231, Cambridge, Massachusetts, USA, August. Andreas Stolcke. 2002. SRILM An Extensible Language Modeling Toolkit. In Proceedings of the International Conference Spoken Language Processing, pages 901–904, Denver, CO. Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Binarizing Syntax Trees to Improve Syntax-Based Machine Translation Accuracy. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 746–754, Prague, Czech Republic, June. Jun Xie, Haitao Mi, and Qun Liu. 2011. A Novel Dependency-to-string Model for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 216–226, Edinburgh, United Kingdom, July. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 521– 528, Sydney, Australia, July. Deyi Xiong, Qun Liu, and Shouxun Lin. 2007. A Dependency Treelet String Correspondence Model for Statistical Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 40–47, Prague, June. 107
2016
10
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1054–1063, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models Minh-Thang Luong and Christopher D. Manning Computer Science Department, Stanford University, Stanford, CA 94305 {lmthang,manning}@stanford.edu Abstract Nearly all previous work on neural machine translation (NMT) has used quite restricted vocabularies, perhaps with a subsequent method to patch in unknown words. This paper presents a novel wordcharacter solution to achieving open vocabulary NMT. We build hybrid systems that translate mostly at the word level and consult the character components for rare words. Our character-level recurrent neural networks compute source word representations and recover unknown target words when needed. The twofold advantage of such a hybrid approach is that it is much faster and easier to train than character-based ones; at the same time, it never produces unknown words as in the case of word-based models. On the WMT’15 English to Czech translation task, this hybrid approach offers an addition boost of +2.1−11.4 BLEU points over models that already handle unknown words. Our best system achieves a new state-of-the-art result with 20.7 BLEU score. We demonstrate that our character models can successfully learn to not only generate well-formed words for Czech, a highly-inflected language with a very complex vocabulary, but also build correct representations for English source words. 1 Introduction Neural Machine Translation (NMT) is a simple new architecture for getting machines to translate. At its core, NMT is a single deep neural network that is trained end-to-end with several advantages such as simplicity and generalization. Despite being relatively new, NMT has already achieved Figure 1: Hybrid NMT – example of a wordcharacter model for translating “a cute cat” into “un joli chat”. Hybrid NMT translates at the word level. For rare tokens, the character-level components build source representations and recover target <unk>. “_” marks sequence boundaries. state-of-the-art translation results for several language pairs such as English-French (Luong et al., 2015b), English-German (Jean et al., 2015a; Luong et al., 2015a; Luong and Manning, 2015), and English-Czech (Jean et al., 2015b). While NMT offers many advantages over traditional phrase-based approaches, such as small memory footprint and simple decoder implementation, nearly all previous work in NMT has used quite restricted vocabularies, crudely treating all other words the same with an <unk> symbol. Sometimes, a post-processing step that patches in unknown words is introduced to alleviate this problem. Luong et al. (2015b) propose to annotate 1054 occurrences of target <unk> with positional information to track their alignments, after which simple word dictionary lookup or identity copy can be performed to replace <unk> in the translation. Jean et al. (2015a) approach the problem similarly but obtain the alignments for unknown words from the attention mechanism. We refer to these as the unk replacement technique. Though simple, these approaches ignore several important properties of languages. First, monolingually, words are morphologically related; however, they are currently treated as independent entities. This is problematic as pointed out by Luong et al. 
(2013): neural networks can learn good representations for frequent words such as “distinct”, but fail for rare-but-related words like “distinctiveness”. Second, crosslingually, languages have different alphabets, so one cannot naïvely memorize all possible surface word translations such as name transliteration between “Christopher” (English) and “Kry˘stof” (Czech). See more on this problem in (Sennrich et al., 2016). To overcome these shortcomings, we propose a novel hybrid architecture for NMT that translates mostly at the word level and consults the character components for rare words when necessary. As illustrated in Figure 1, our hybrid model consists of a word-based NMT that performs most of the translation job, except for the two (hypothetically) rare words, “cute” and “joli”, that are handled separately. On the source side, representations for rare words, “cute”, are computed on-thefly using a deep recurrent neural network that operates at the character level. On the target side, we have a separate model that recovers the surface forms, “joli”, of <unk> tokens character-bycharacter. These components are learned jointly end-to-end, removing the need for a separate unk replacement step as in current NMT practice. Our hybrid NMT offers a twofold advantage: it is much faster and easier to train than characterbased models; at the same time, it never produces unknown words as in the case of word-based ones. We demonstrate at scale that on the WMT’15 English to Czech translation task, such a hybrid approach provides an additional boost of +2.1−11.4 BLEU points over models that already handle unknown words. We achieve a new state-of-theart result with 20.7 BLEU score. Our analysis demonstrates that our character models can successfully learn to not only generate well-formed words for Czech, a highly-inflected language with a very complex vocabulary, but also build correct representations for English source words. We provide code, data, and models at http: //nlp.stanford.edu/projects/nmt. 2 Related Work There has been a recent line of work on end-toend character-based neural models which achieve good results for part-of-speech tagging (dos Santos and Zadrozny, 2014; Ling et al., 2015a), dependency parsing (Ballesteros et al., 2015), text classification (Zhang et al., 2015), speech recognition (Chan et al., 2016; Bahdanau et al., 2016), and language modeling (Kim et al., 2016; Jozefowicz et al., 2016). However, success has not been shown for cross-lingual tasks such as machine translation.1 Sennrich et al. (2016) propose to segment words into smaller units and translate just like at the word level, which does not learn to understand relationships among words. Our work takes inspiration from (Luong et al., 2013) and (Li et al., 2015). Similar to the former, we build representations for rare words on-the-fly from subword units. However, we utilize recurrent neural networks with characters as the basic units; whereas Luong et al. (2013) use recursive neural networks with morphemes as units, which requires existence of a morphological analyzer. In comparison with (Li et al., 2015), our hybrid architecture is also a hierarchical sequence-to-sequence model, but operates at a different granularity level, word-character. In contrast, Li et al. (2015) build hierarchical models at the sentence-word level for paragraphs and documents. 3 Background & Our Models Neural machine translation aims to directly model the conditional probability p(y|x) of translating a source sentence, x1, . . . 
, xn, to a target sentence, y1, . . . , ym. It accomplishes this goal through an encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). The encoder computes a representation s for each source sentence. Based on that source 1Recently, Ling et al. (2015b) attempt character-level NMT; however, the experimental evidence is weak. The authors demonstrate only small improvements over word-level baselines and acknowledge that there are no differences of significance. Furthermore, only small datasets were used without comparable results from past NMT work. 1055 representation, the decoder generates a translation, one target word at a time, and hence, decomposes the log conditional probability as: log p(y|x) = Xm t=1 log p (yt|y<t, s) (1) A natural model for sequential data is the recurrent neural network (RNN), used by most of the recent NMT work. Papers, however, differ in terms of: (a) architecture – from unidirectional, to bidirectional, and deep multi-layer RNNs; and (b) RNN type – which are long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the gated recurrent unit (Cho et al., 2014). All our models utilize the deep multi-layer architecture with LSTM as the recurrent unit; detailed formulations are in (Zaremba et al., 2014). Considering the top recurrent layer in a deep LSTM, with ht being the current target hidden state as in Figure 2, one can compute the probability of decoding each target word yt as: p (yt|y<t, s) = softmax (ht) (2) For a parallel corpus D, we train our model by minimizing the below cross-entropy loss: J = X (x,y)∈D −log p(y|x) (3) Attention Mechanism – The early NMT approaches (Sutskever et al., 2014; Cho et al., 2014), which we have described above, use only the last encoder state to initialize the decoder, i.e., setting the input representation s in Eq. (1) to [¯hn]. Recently, Bahdanau et al. (2015) propose an attention mechanism, a form of random access memory for NMT to cope with long input sequences. Luong et al. (2015a) further extend the attention mechanism to different scoring functions, used to compare source and target hidden states, as well as different strategies to place the attention. In all our models, we utilize the global attention mechanism and the bilinear form for the attention scoring function similar to (Luong et al., 2015a). Specifically, we set s in Eq. (1) to the set of source hidden states at the top layer, [¯h1, . . . , ¯hn]. As illustrated in Figure 2, the attention mechanism consists of two stages: (a) context vector – the current hidden state ht is compared with individual source hidden states in s to learn an alignment vector, which is then used to compute the context vector ct as a weighted average of s; and (b) attentional hidden state – the context vector ct is then yt ct ¯h1 ¯hn ht ˜ht Figure 2: Attention mechanism. used to derive a new attentional hidden state: ˜ht = tanh(W[ct; ht]) (4) The attentional vector ˜ht then replaces ht in Eq. (2) in predicting the next word. 4 Hybrid Neural Machine Translation Our hybrid architecture, illustrated in Figure 1, leverages the power of both words and characters to achieve the goal of open vocabulary NMT. The core of the design is a word-level NMT with the advantage of being fast and easy to train. The character components empower the word-level system with the abilities to compute any source word representation on the fly from characters and to recover character-by-character unknown target words originally produced as <unk>. 
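To make the attention computation above concrete, here is a minimal NumPy sketch of the global attention step: bilinear scores over the source hidden states, an alignment distribution, a context vector, and the attentional hidden state of Eq. (4). The dimensions, random parameters, and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of global attention with a bilinear scoring function.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 5                          # hidden size, number of source positions

h_src = rng.normal(size=(n, d))      # top-layer source hidden states [h_1, ..., h_n]
h_t   = rng.normal(size=(d,))        # current target hidden state h_t
W_a   = rng.normal(size=(d, d))      # bilinear attention parameters
W_c   = rng.normal(size=(d, 2 * d))  # parameters of Eq. (4)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# (a) bilinear scores and alignment weights over source positions
scores = h_src @ W_a @ h_t           # shape (n,)
align  = softmax(scores)             # attention distribution

# (b) context vector as the weighted average of source states
c_t = align @ h_src                  # shape (d,)

# attentional hidden state, Eq. (4): h~_t = tanh(W[c_t; h_t])
h_tilde = np.tanh(W_c @ np.concatenate([c_t, h_t]))

print(align.round(3), h_tilde.shape)
```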
4.1 Word-based Translation as a Backbone The core of our hybrid NMT is a deep LSTM encoder-decoder that translates at the word level as described in Section 3. We maintain a vocabulary of |V | frequent words for each language. Other words not inside these lists are represented by a universal symbol <unk>, one per language. We translate just like a word-based NMT system with respect to these source and target vocabularies, except for cases that involve <unk> in the source input or the target output. These correspond to the character-level components illustrated in Figure 1. A nice property of our hybrid approach is that by varying the vocabulary size, one can control 1056 how much to blend the word- and character-based models; hence, taking the best of both worlds. 4.2 Source Character-based Representation In regular word-based NMT, for all rare words outside the source vocabulary, one feeds the universal embedding representing <unk> as input to the encoder. This is problematic because it discards valuable information about the source word. To fix that, we learn a deep LSTM model over characters of source words. For example, in Figure 1, we run our deep character-based LSTM over ‘c’, ‘u’, ‘t’, ‘e’, and ‘_’ (the boundary symbol). The final hidden state at the top layer will be used as the on-the-fly representation for the current rare word. The layers of the deep character-based LSTM are always initialized with zero states. One might propose to connect hidden states of the wordbased LSTM to the character-based model; however, we chose this design for various reasons. First, it simplifies the architecture. Second, it allows for efficiency through precomputation: before each mini-batch, we can compute representations for rare source words all at once. All instances of the same word share the same embedding, so the computation is per type.2 4.3 Target Character-level Generation General word-based NMT allows generation of <unk> in the target output. Afterwards, there is usually a post-processing step that handles these unknown tokens by utilizing the alignment information derived from the attention mechanism and then performing simple word dictionary lookup or identity copy (Luong et al., 2015a; Jean et al., 2015a). While this approach works, it suffers from various problems such as alphabet mismatches between the source and target vocabularies and multi-word alignments. Our goal is to address all these issues and create a coherent framework that handles an unlimited output vocabulary. Our solution is to have a separate deep LSTM that “translates” at the character level given the current word-level state. We train our system such that whenever the word-level NMT produces an <unk>, we can consult this character-level decoder to recover the correct surface form of the unknown target word. This is illustrated in Figure 1. 2While Ling et al. (2015b) found that it is slow and difficult to train source character-level models and had to resort to pretraining, we demonstrate later that we can train our deep character-level LSTM perfectly fine in an end-to-end fashion. The training objective in Eq. (3) now becomes: J = Jw + αJc (5) Here, Jw refers to the usual loss of the wordlevel NMT; in our example, it is the sum of the negative log likelihood of generating {“un”, “<unk>”, “chat”, “_”}. The remaining component Jc corresponds to the loss incurred by the character-level decoder when predicting characters, e.g., {‘j’, ‘o’, ‘l’, ‘i’, ‘_’}, of those rare words not in the target vocabulary. 
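A minimal sketch of the source character-level representation described in Section 4.2: run an LSTM over the characters of a rare word (plus the boundary symbol "_") and use the final hidden state as its on-the-fly embedding, computed once per word type before a mini-batch. This is a single-layer, NumPy-only illustration; the paper uses a deep LSTM, and all parameter values and names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
chars = list("abcdefghijklmnopqrstuvwxyz") + ["_"]
char2id = {c: i for i, c in enumerate(chars)}
d_char, d_hid = 16, 32

E = rng.normal(scale=0.1, size=(len(chars), d_char))         # character embeddings
W = rng.normal(scale=0.1, size=(4 * d_hid, d_char + d_hid))  # gates: i, f, o, g
b = np.zeros(4 * d_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def rare_word_vector(word):
    """Final LSTM hidden state over the word's characters = its representation."""
    h = np.zeros(d_hid)
    c = np.zeros(d_hid)
    for ch in word + "_":            # "_" marks the word boundary
        h, c = lstm_step(E[char2id[ch]], h, c)
    return h

# Shared by all tokens of the same type, so the computation is per type.
print(rare_word_vector("cute")[:4].round(3))
```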
Hidden-state Initialization Unlike the source character-based representations, which are context-independent, the target character-level generation requires the current word-level context to produce meaningful translation. This brings up an important question about what can best represent the current context so as to initialize the character-level decoder. We answer this question in the context of the attention mechanism (§3). The final vector ˜ht, just before the softmax as shown in Figure 2, seems to be a good candidate to initialize the character-level decoder. The reason is that ˜ht combines information from both the context vector ct and the top-level recurrent state ht. We refer to it later in our experiments as the same-path target generation approach. On the other hand, the same-path approach worries us because all vectors ˜ht used to seed the character-level decoder might have similar values, leading to the same character sequence being produced. The reason is because ˜ht is directly used in the softmax, Eq. (2), to predict the same <unk>. That might pose some challenges for the model to learn useful representations that can be used to accomplish two tasks at the same time, that is to predict <unk> and to generate character sequences. To address that concern, we propose another approach called the separate-path target generation. Our separate-path target generation approach works as follows. We mimic the process described in Eq. (4) to create a counterpart vector ˘ht that will be used to seed the character-level decoder: ˘ht = tanh( ˘ W [ct; ht]) (6) Here, ˘ W is a new learnable parameter matrix, with which we hope to release W from the pressure of having to extract information relevant to both the word- and character-generation processes. Only the hidden state of the first layer 1057 is initialized as discussed above. The other components in the character-level decoder such as the LSTM cells of all layers and the hidden states of higher layers, all start with zero values. Implementation-wise, the computation in the character-level decoder is done per word token instead of per type as in the source character component (§4.2). This is because of the contextdependent nature of the decoder. Word-Character Generation Strategy With the character-level decoder, we can view the final hidden states as representations for the surface forms of unknown tokens and could have fed these to the next time step. However, we chose not to do so for the efficiency reason explained next; instead, <unk> is fed to the word-level decoder “as is” using its corresponding word embedding. During training, this design choice decouples all executions over <unk> instances of the character-level decoder as soon the word-level NMT completes. As such, the forward and backward passes of the character-level decoder over rare words can be invoked in batch mode. At test time, our strategy is to first run a beam search decoder at the word level to find the best translations given by the word-level NMT. Such translations contains <unk> tokens, so we utilize our character-level decoder with beam search to generate actual words for these <unk>. 5 Experiments We evaluate the effectiveness of our models on the publicly available WMT’15 translation task from English into Czech with newstest2013 (3000 sentences) as a development set and newstest2015 (2656 sentences) as a test set. 
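A minimal sketch of the two ways of seeding the character-level decoder described in Section 4.3: the same-path seed reuses the attentional vector of Eq. (4), while the separate-path seed uses its own matrix as in Eq. (6); only the first layer of the character decoder receives the seed, and all other states start at zero. Shapes and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_layers = 8, 2

c_t = rng.normal(size=(d,))            # context vector at the <unk> position
h_t = rng.normal(size=(d,))            # top-level recurrent state
W       = rng.normal(size=(d, 2 * d))  # matrix used for word prediction, Eq. (4)
W_breve = rng.normal(size=(d, 2 * d))  # separate matrix, Eq. (6)

h_tilde = np.tanh(W @ np.concatenate([c_t, h_t]))        # same-path seed
h_breve = np.tanh(W_breve @ np.concatenate([c_t, h_t]))  # separate-path seed

# Seed only the first layer of the character-level decoder; higher layers and
# all LSTM cells are initialized to zero.
init_hidden = [h_breve] + [np.zeros(d) for _ in range(n_layers - 1)]
init_cells  = [np.zeros(d) for _ in range(n_layers)]

print(np.allclose(h_tilde, h_breve), len(init_hidden))
```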
Two metrics are used: case-sensitive NIST BLEU (Papineni et al., 2002) and chrF3 (Popovi´c, 2015).3 The latter measures the amounts of overlapping character ngrams and has been argued to be a better metric for translation tasks out of English. 5.1 Data Among the available language pairs in WMT’15, all involving English, we choose Czech as a target language for several reasons. First and foremost, Czech is a Slavic language with not only rich and 3For NIST BLEU, we first run detokenizer.pl and then use mteval-v13a to compute the scores as per WMT guideline. For chrF3, we utilize the implementation here https://github.com/rsennrich/subword-nmt. English Czech word char word char # Sents 15.8M # Tokens 254M 1,269M 224M 1,347M # Types 1,172K 2003 1,760K 2053 200-char 98.1% 98.8% Table 1: WMT’15 English-Czech data – shown are various statistics of our training data such as sentence, token (word and character counts), as well as type (sizes of the word and character vocabularies). We show in addition the amount of words in a vocabulary expressed by a list of 200 characters found in frequent words. complex inflection, but also fusional morphology in which a single morpheme can encode multiple grammatical, syntactic, or semantic meanings. As a result, Czech possesses an enormously large vocabulary (about 1.5 to 2 times bigger than that of English according to statistics in Table 1) and is a challenging language to translate into. Furthermore, this language pair has a large amount of training data, so we can evaluate at scale. Lastly, though our techniques are language independent, it is easier for us to work with Czech since Czech uses the Latin alphabet with some diacritics. In terms of preprocessing, we apply only the standard tokenization practice.4 We choose for each language a list of 200 characters found in frequent words, which, as shown in Table 1, can represent more than 98% of the vocabulary. 5.2 Training Details We train three types of systems, purely wordbased, purely character-based, and hybrid. Common to these architectures is a word-based NMT since the character-based systems are essentially word-based ones with longer sequences and the core of hybrid models is also a word-based NMT. In training word-based NMT, we follow Luong et al. (2015a) to use the global attention mechanism together with similar hyperparameters: (a) deep LSTM models, 4 layers, 1024 cells, and 1024-dimensional embeddings, (b) uniform initialization of parameters in [−0.1, 0.1], (c) 6epoch training with plain SGD and a simple learning rate schedule – start with a learning rate of 1.0; after 4 epochs, halve the learning rate every 0.5 epoch, (d) mini-batches are of size 128 and shuf4Use tokenizer.perl in Moses with default settings. 
1058 System Vocab Perplexity BLEU chrF3 w c (a) Best WMT’15, big data (Bojar and Tamchyna, 2015) 18.8 Existing NMT (b) RNNsearch + unk replace (Jean et al., 2015b) 200K 15.7 (c) Ensemble 4 models + unk replace (Jean et al., 2015b) 200K 18.3 Our word-based NMT (d) Base + attention + unk replace 50K 5.9 17.5 42.4 (e) Ensemble 4 models + unk replace 50K 18.4 43.9 Our character-based NMT (f) Base-512 (600-step backprop) 200 2.4 3.8 25.9 (g) Base-512 + attention (600-step backprop) 200 1.6 17.5 46.6 (h) Base-1024 + attention (300-step backprop) 200 1.9 15.7 41.1 Our hybrid NMT (i) Base + attention + same-path 10K 4.9 1.7 14.1 37.2 (j) Base + attention + separate-path 10K 4.9 1.7 15.6 39.6 (k) Base + attention + separate-path + 2-layer char 10K 4.7 1.6 17.7 44.1 (l) Base + attention + separate-path + 2-layer char 50K 5.7 1.6 19.6 46.5 (m) Ensemble 4 models 50K 20.7 47.5 Table 2: WMT’15 English-Czech results – shown are the vocabulary sizes, perplexities, BLEU, and chrF3 scores of various systems on newstest2015. Perplexities are listed under two categories, word (w) and character (c). Best and important results per metric are highlighed. fled, (e) the gradient is rescaled whenever its norm exceeds 5, and (f) dropout is used with probability 0.2 according to (Pham et al., 2014). We now detail differences across the three architectures. Word-based NMT – We constrain our source and target sequences to have a maximum length of 50 each; words that go past the boundary are ignored. The vocabularies are limited to the top |V | most frequent words in both languages. Words not in these vocabularies are converted into <unk>. After translating, we will perform dictionary5 lookup or identity copy for <unk> using the alignment information from the attention models. Such procedure is referred as the unk replace technique (Luong et al., 2015b; Jean et al., 2015a). Character-based NMT – The source and target sequences at the character level are often about 5 times longer than their counterparts in the wordbased models as we can infer from the statistics in Table 1. Due to memory constraint in GPUs, we limit our source and target sequences to a maximum length of 150 each, i.e., we backpropagate through at most 300 timesteps from the decoder to the encoder. With smaller 512-dimensional models, we can afford to have longer sequences with 5Obtained from the alignment links produced by the Berkeley aligner (Liang et al., 2006) over the training corpus. up to 600-step backpropagation. Hybrid NMT – The word-level component uses the same settings as the purely word-based NMT. For the character-level source and target components, we experiment with both shallow and deep 1024-dimensional models of 1 and 2 LSTM layers. We set the weight α in Eq. (5) for our character-level loss to 1.0. Training Time – It takes about 3 weeks to train a word-based model with |V | = 50K and about 3 months to train a character-based model. Training and testing for the hybrid models are about 1020% slower than those of the word-based models with the same vocabulary size. 5.3 Results We compare our models with several strong systems. These include the winning entry in WMT’15, which was trained on a much larger amount of data, 52.6M parallel and 393.0M monolingual sentences (Bojar and Tamchyna, 2015).6 In contrast, we merely use the provided parallel corpus of 15.8M sentences. For NMT, to the best 6This entry combines two independent systems, a phrasebased Moses model and a deep-syntactic transfer-based model. 
Additionally, there is an automatic post-editing system with hand-crafted rules to correct errors in morphological agreement and semantic meanings, e.g., loss of negation. 1059 of our knowledge, (Jean et al., 2015b) has the best published performance on English-Czech. As shown in Table 2, for a purely word-based approach, our single NMT model outperforms the best single model in (Jean et al., 2015b) by +1.8 points despite using a smaller vocabulary of only 50K words versus 200K words. Our ensemble system (e) slightly outperforms the best previous NMT system with 18.4 BLEU. To our surprise, purely character-based models, though extremely slow to train and test, perform quite well. The 512-dimensional attention-based model (g) is best, surpassing the single wordbased model in (Jean et al., 2015b) despite having much fewer parameters. It even outperforms most NMT systems on chrF3 with 46.6 points. This indicates that this model translate words that closely but not exactly match the reference ones as evidenced in Section 6.3. We notice two interesting observations. First, attention is critical for character-based models to work as is obvious from the poor performance of the non-attentional model; this has also been shown in speech recognition (Chan et al., 2016). Second, long time-step backpropagation is more important as reflected by the fact that the larger 1024-dimensional model (h) with shorter backprogration is inferior to (g). Our hybrid models achieve the best results. At 10K words, we demonstrate that our separatepath strategy for the character-level target generation (§4.3) is effective, yielding an improvement of +1.5 BLEU points when comparing systems (j) vs. (i). A deeper character-level architecture of 2 LSTM layers provides another significant boost of +2.1 BLEU. With 17.7 BLEU points, our hybrid system (k) has surpassed word-level NMT models. When extending to 50K words, we further improve the translation quality. Our best single model, system (l) with 19.6 BLEU, is already better than all existing systems. Our ensemble model (m) further advances the SOTA result to 20.7 BLEU, outperforming the winning entry in the WMT’15 English-Czech translation task by a large margin of +1.9 points. Our ensemble model is also best in terms of chrF3 with 47.5 points. 6 Analysis This section first studies the effects of vocabulary sizes towards translation quality. We then analyze more carefully our character-level components by visualizing and evaluating rare word embeddings 0 10 20 30 40 50 0 5 10 15 20 Vocabulary Size (x1000) BLEU Word Word + unk replace Hybrid +11.4 +5.0 +2.1 +3.5 Figure 3: Vocabulary size effect – shown are the performances of different systems as we vary their vocabulary sizes. We highlight the improvements obtained by our hybrid models over word-based systems which already handle unknown words. as well as examining sample translations. 6.1 Effects of Vocabulary Sizes As shown in Figure 3, our hybrid models offer large gains of +2.1-11.4 BLEU points over strong word-based systems which already handle unknown words. With only a small vocabulary, e.g., 1000 words, our hybrid approach can produce systems that are better than word-based models that possess much larger vocabularies. While it appears from the plot that gains diminish as we increase the vocabulary size, we argue that our hybrid models are still preferable since they understand word structures and can handle new complex words at test time as illustrated in Section 6.3. 
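The translation results above are reported in BLEU and chrF3. To make the latter concrete, below is a simplified sketch of a character n-gram F-score with recall weighted by beta = 3, in the spirit of chrF (Popović, 2015); it is not the official implementation and omits corpus-level averaging and whitespace-handling details.

```python
from collections import Counter

def char_ngrams(text, n):
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=3.0):
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(round(chrf("un joli chat", "un joli chat"), 3))   # 1.0
print(round(chrf("un jolie chat", "un joli chat"), 3))  # high, but below 1.0
```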
6.2 Rare Word Embeddings

We evaluate the source character-level model by building representations for rare words and measuring how good these embeddings are. Quantitatively, we follow Luong et al. (2013) in using the word similarity task, specifically on the Rare Word dataset, to judge the learned representations for complex words. The evaluation metric is Spearman's correlation ρ between similarity scores assigned by a model and by human annotators. From the results in Table 3, we can see that source representations produced by our hybrid models are significantly better than those of the word-based one. (For frequent words we look up the encoder embeddings directly; representations for rare words are built from characters.) It is noteworthy that our deep recurrent character-level models can outperform the model of Luong et al. (2013), which uses recursive neural networks and requires a complex morphological analyzer, by a large margin. Our performance is also competitive with the best GloVe embeddings (Pennington et al., 2014), which were trained on a much larger dataset.

Figure 4: Barnes-Hut-SNE visualization of source word representations – shown are sample words from the Rare Word dataset. We differentiate two types of embeddings: frequent words, for which encoder embeddings are looked up directly, and rare words, for which we build representations from characters. Boxes highlight examples that we discuss in the text. We use the hybrid model (l) in this visualization.

Table 3: Word similarity task – shown are Spearman's correlation ρ on the Rare Word dataset of various models (with different vocab sizes |V|).
System                            Size   |V|    ρ
(Luong et al., 2013)              1B     138K   34.4
Glove (Pennington et al., 2014)   6B     400K   38.1
Glove (Pennington et al., 2014)   42B    400K   47.8
Our NMT models:
(d) Word-based                    0.3B   50K    20.4
(k) Hybrid                        0.3B   10K    42.4
(l) Hybrid                        0.3B   50K    47.1

Qualitatively, we visualize embeddings produced by the hybrid model (l) for selected words in the Rare Word dataset. Figure 4 shows the two-dimensional representations of words computed by the Barnes-Hut-SNE algorithm (van der Maaten, 2013). (We run the algorithm over a set of 91 words, but filter out 27 words for display clarity.) It is extremely interesting to observe that words are clustered together not only by word structure but also by meaning. For example, in the top-left box, the character-based representations for “loveless”, “spiritless”, “heartlessly”, and “heartlessness” are nearby, but clearly separated into two groups. Similarly, in the center boxes, word-based embeddings of “acceptable”, “satisfactory”, “unacceptable”, and “unsatisfactory” are close by but separated by meanings.
Lastly, the remaining boxes demonstrate that our character-level models are able to build representations comparable to the word-based ones, e.g., “impossibilities” vs. “impossible” and “antagonize” vs. “antagonist”. All of this evidence strongly supports that the source character-level models are useful and effective. 6.3 Sample Translations We show in Table 4 sample translations between various systems. In the first example, our hybrid model translates perfectly. The word-based model fails to translate “diagnosis” because the second <unk> was incorrectly aligned to the word “after”. The character-based model, on the other hand, makes a mistake in translating names. For the second example, the hybrid model surprises us when it can capture the long-distance reordering of “fifty years ago” and “p˘red padesáti lety” while the other two models do not. The word-based model translates “Jr.” inaccurately due to the incorrect alignment between the second <unk> and the word “said”. The characterbased model literally translates the name “King” into “král” which means “king”. Lastly, both the character-based and hybrid 1061 1 source The author Stephen Jay Gould died 20 years after diagnosis . human Autor Stephen Jay Gould zem˘rel 20 let po diagnóze . word Autor Stephen Jay <unk> zem˘rel 20 let po <unk> . Autor Stephen Jay Gould zem˘rel 20 let po po . char Autor Stepher Stepher zem˘rel 20 let po diagnóze . hybrid Autor <unk> <unk> <unk> zem˘rel 20 let po <unk>. Autor Stephen Jay Gould zem˘rel 20 let po diagnóze . 2 source As the Reverend Martin Luther King Jr. said fifty years ago : human Jak p˘red padesáti lety ˘rekl reverend Martin Luther King Jr . : word Jak ˘rekl reverend Martin <unk> King <unk> p˘red padesáti lety : Jak ˘rekl reverend Martin Luther King ˘rekl p˘red padesáti lety : char Jako reverend Martin Luther král ˘ríkal p˘red padesáti lety : hybrid Jak p˘red <unk> lety ˘rekl <unk> Martin <unk> <unk> <unk> : Jak p˘red padesáti lety ˘rekl reverend Martin Luther King Jr. : 3 source Her 11-year-old daughter , Shani Bart , said it felt a " little bit weird " [..] back to school . human Její jedenáctiletá dcera Shani Bartová prozradila , ˘ze " je to trochu zvlá˘stní " [..] znova do ˘skoly . word Její <unk> dcera <unk> <unk> ˘rekla , ˘ze je to " trochu divné " , [..] vrací do ˘skoly . Její 11-year-old dcera Shani , ˘rekla , ˘ze je to " trochu divné " , [..] vrací do ˘skoly . char Její jedenáctiletá dcera , Shani Bartová , ˘ríkala , ˘ze cítí trochu divn˘e , [..] vrátila do ˘skoly . hybrid Její <unk> dcera , <unk> <unk> , ˘rekla , ˘ze cítí " trochu <unk> " , [..] vrátila do ˘skoly . Její jedenáctiletá dcera , Graham Bart , ˘rekla , ˘ze cítí " trochu divný " , [..] vrátila do ˘skoly . Table 4: Sample translations on newstest2015 – for each example, we show the source, human translation, and translations of the following NMT systems: word model (d), char model (g), and hybrid model (k). We show the translations before replacing <unk> tokens (if any) for the word-based and hybrid models. The following formats are used to highlight correct, wrong, and close translation segments. models impress us by their ability to translate compound words exactly, e.g., “11-year-old” and “jedenáctiletá”; whereas the identity copy strategy of the word-based model fails. Of course, our hybrid model does make mistakes, e.g., it fails to translate the name “Shani Bart”. 
Overall, these examples highlight how challenging translating into Czech is and that being able to translate at the character level helps improve the quality. 7 Conclusion We have proposed a novel hybrid architecture that combines the strength of both word- and character-based models. Word-level models are fast to train and offer high-quality translation; whereas, character-level models help achieve the goal of open vocabulary NMT. We have demonstrated these two aspects through our experimental results and translation examples. Our best hybrid model has surpassed the performance of both the best word-based NMT system and the best non-neural model to establish a new state-of-the-art result for English-Czech translation in WMT’15 with 20.7 BLEU. Moreover, we have succeeded in replacing the standard unk replacement technique in NMT with our characterlevel components, yielding an improvement of +2.1−11.4 BLEU points. Our analysis has shown that our model has the ability to not only generate well-formed words for Czech, a highly inflected language with an enormous and complex vocabulary, but also build accurate representations for English source words. Additionally, we have demonstrated the potential of purely character-based models in producing good translations; they have outperformed past word-level NMT models. For future work, we hope to be able to improve the memory usage and speed of purely character-based models. Acknowledgments This work was partially supported by NSF Award IIS-1514268 and by a gift from Bloomberg L.P. We thank Dan Jurafsky, Andrew Ng, and Quoc Le for earlier feedback on the work, as well as Sam Bowman, Ziang Xie, and Jiwei Li for their valuable comments on the paper draft. Lastly, we thank NVIDIA Corporation for the donation of Tesla K40 GPUs as well as Andrew Ng and his group for letting us use their computing resources. 1062 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. End-to-end attention-based large vocabulary speech recognition. In ICASSP. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In EMNLP. Ond˘rej Bojar and Ale˘s Tamchyna. 2015. CUNI in WMT15: Chimera Strikes Again. In WMT. William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell. In ICASSP. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Cícero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In ICML. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. 9(8):1735–1780. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On using very large target vocabulary for neural machine translation. In ACL. Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal neural machine translation systems for WMT’15. In WMT. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. 
Rush. 2016. Character-aware neural language models. In AAAI. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In ACL. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In NAACL. Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luís Marujo, and Tiago Luís. 2015a. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan Black. 2015b. Character-based neural machine translation. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In EMNLP. Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In ACL. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In ICFHR. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In WMT. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. Laurens van der Maaten. 2013. Barnes-Hut-SNE. In ICLR. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. abs/1409.2329. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. 1063
2016
100
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1064–1074, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF Xuezhe Ma and Eduard Hovy Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected], [email protected] Abstract State-of-the-art sequence labeling systems traditionally require large amounts of taskspecific knowledge in the form of handcrafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data preprocessing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks — Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets — 97.55% accuracy for POS tagging and 91.21% F1 for NER. 1 Introduction Linguistic sequence labeling, such as part-ofspeech (POS) tagging and named entity recognition (NER), is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. Natural language processing (NLP) systems, like syntactic parsing (Nivre and Scholz, 2004; McDonald et al., 2005; Koo and Collins, 2010; Ma and Zhao, 2012a; Ma and Zhao, 2012b; Chen and Manning, 2014; Ma and Hovy, 2015) and entity coreference resolution (Ng, 2010; Ma et al., 2016), are becoming more sophisticated, in part because of utilizing output information of POS tagging or NER systems. Most traditional high performance sequence labeling models are linear statistical models, including Hidden Markov Models (HMM) and Conditional Random Fields (CRF) (Ratinov and Roth, 2009; Passos et al., 2014; Luo et al., 2015), which rely heavily on hand-crafted features and taskspecific resources. For example, English POS taggers benefit from carefully designed word spelling features; orthographic features and external resources such as gazetteers are widely used in NER. However, such task-specific knowledge is costly to develop (Ma and Xia, 2014), making sequence labeling models difficult to adapt to new tasks or new domains. In the past few years, non-linear neural networks with as input distributed word representations, also known as word embeddings, have been broadly applied to NLP problems with great success. Collobert et al. (2011) proposed a simple but effective feed-forward neutral network that independently classifies labels for each word by using contexts within a window with fixed size. Recently, recurrent neural networks (RNN) (Goller and Kuchler, 1996), together with its variants such as long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) and gated recurrent unit (GRU) (Cho et al., 2014), have shown great success in modeling sequential data. Several RNN-based neural network models have been proposed to solve sequence labeling tasks like speech recognition (Graves et al., 2013), POS tagging (Huang et al., 2015) and NER (Chiu and Nichols, 2015; Hu et al., 2016), achieving competitive performance against traditional models. 
However, even systems that have utilized distributed representations as inputs have used these to augment, rather than replace, hand-crafted features (e.g. word spelling and capitalization patterns). Their performance drops rapidly when the models solely depend on neural embeddings. 1064 In this paper, we propose a neural network architecture for sequence labeling. It is a truly endto-end model requiring no task-specific resources, feature engineering, or data pre-processing beyond pre-trained word embeddings on unlabeled corpora. Thus, our model can be easily applied to a wide range of sequence labeling tasks on different languages and domains. We first use convolutional neural networks (CNNs) (LeCun et al., 1989) to encode character-level information of a word into its character-level representation. Then we combine character- and word-level representations and feed them into bi-directional LSTM (BLSTM) to model context information of each word. On top of BLSTM, we use a sequential CRF to jointly decode labels for the whole sentence. We evaluate our model on two linguistic sequence labeling tasks — POS tagging on Penn Treebank WSJ (Marcus et al., 1993), and NER on English data from the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003). Our end-to-end model outperforms previous stateof-the-art systems, obtaining 97.55% accuracy for POS tagging and 91.21% F1 for NER. The contributions of this work are (i) proposing a novel neural network architecture for linguistic sequence labeling. (ii) giving empirical evaluations of this model on benchmark data sets for two classic NLP tasks. (iii) achieving state-of-the-art performance with this truly end-to-end system. 2 Neural Network Architecture In this section, we describe the components (layers) of our neural network architecture. We introduce the neural layers in our neural network oneby-one from bottom to top. 2.1 CNN for Character-level Representation Previous studies (Santos and Zadrozny, 2014; Chiu and Nichols, 2015) have shown that CNN is an effective approach to extract morphological information (like the prefix or suffix of a word) from characters of words and encode it into neural representations. Figure 1 shows the CNN we use to extract character-level representation of a given word. The CNN is similar to the one in Chiu and Nichols (2015), except that we use only character embeddings as the inputs to CNN, without character type features. A dropout layer (Srivastava et al., 2014) is applied before character embeddings are input to CNN. P l a y i n g Padding Padding Char Embedding Convolution Max Pooling Char Representation Figure 1: The convolution neural network for extracting character-level representations of words. Dashed arrows indicate a dropout layer applied before character embeddings are input to CNN. 2.2 Bi-directional LSTM 2.2.1 LSTM Unit Recurrent neural networks (RNNs) are a powerful family of connectionist models that capture time dynamics via cycles in the graph. Though, in theory, RNNs are capable to capturing long-distance dependencies, in practice, they fail due to the gradient vanishing/exploding problems (Bengio et al., 1994; Pascanu et al., 2012). LSTMs (Hochreiter and Schmidhuber, 1997) are variants of RNNs designed to cope with these gradient vanishing problems. Basically, a LSTM unit is composed of three multiplicative gates which control the proportions of information to forget and to pass on to the next time step. Figure 2 gives the basic structure of an LSTM unit. Figure 2: Schematic of LSTM unit. 
1065 Formally, the formulas to update an LSTM unit at time t are: it = σ(W iht−1 + U ixt + bi) ft = σ(W fht−1 + U fxt + bf) ˜ct = tanh(W cht−1 + U cxt + bc) ct = ft ⊙ct−1 + it ⊙˜ct ot = σ(W oht−1 + U oxt + bo) ht = ot ⊙tanh(ct) where σ is the element-wise sigmoid function and ⊙is the element-wise product. xt is the input vector (e.g. word embedding) at time t, and ht is the hidden state (also called output) vector storing all the useful information at (and before) time t. U i, U f, U c, U o denote the weight matrices of different gates for input xt, and W i, W f, W c, W o are the weight matrices for hidden state ht. bi, bf, bc, bo denote the bias vectors. It should be noted that we do not include peephole connections (Gers et al., 2003) in the our LSTM formulation. 2.2.2 BLSTM For many sequence labeling tasks it is beneficial to have access to both past (left) and future (right) contexts. However, the LSTM’s hidden state ht takes information only from past, knowing nothing about the future. An elegant solution whose effectiveness has been proven by previous work (Dyer et al., 2015) is bi-directional LSTM (BLSTM). The basic idea is to present each sequence forwards and backwards to two separate hidden states to capture past and future information, respectively. Then the two hidden states are concatenated to form the final output. 2.3 CRF For sequence labeling (or general structured prediction) tasks, it is beneficial to consider the correlations between labels in neighborhoods and jointly decode the best chain of labels for a given input sentence. For example, in POS tagging an adjective is more likely to be followed by a noun than a verb, and in NER with standard BIO2 annotation (Tjong Kim Sang and Veenstra, 1999) I-ORG cannot follow I-PER. Therefore, we model label sequence jointly using a conditional random field (CRF) (Lafferty et al., 2001), instead of decoding each label independently. Formally, we use z = {z1, · · · , zn} to represent a generic input sequence where zi is the input vector of the ith word. y = {y1, · · · , yn} represents a generic sequence of labels for z. Y(z) denotes the set of possible label sequences for z. The probabilistic model for sequence CRF defines a family of conditional probability p(y|z; W, b) over all possible label sequences y given z with the following form: p(y|z; W, b) = nQ i=1 ψi(yi−1, yi, z) P y′∈Y(z) nQ i=1 ψi(y′ i−1, y′ i, z) where ψi(y′, y, z) = exp(WT y′,yzi + by′,y) are potential functions, and WT y′,y and by′,y are the weight vector and bias corresponding to label pair (y′, y), respectively. For CRF training, we use the maximum conditional likelihood estimation. For a training set {(zi, yi)}, the logarithm of the likelihood (a.k.a. the log-likelihood) is given by: L(W, b) = X i log p(y|z; W, b) Maximum likelihood training chooses parameters such that the log-likelihood L(W, b) is maximized. Decoding is to search for the label sequence y∗ with the highest conditional probability: y∗= argmax y∈Y(z) p(y|z; W, b) For a sequence CRF model (only interactions between two successive labels are considered), training and decoding can be solved efficiently by adopting the Viterbi algorithm. 2.4 BLSTM-CNNs-CRF Finally, we construct our neural network model by feeding the output vectors of BLSTM into a CRF layer. Figure 3 illustrates the architecture of our network in detail. For each word, the character-level representation is computed by the CNN in Figure 1 with character embeddings as inputs. 
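A minimal NumPy sketch of the character-level CNN just described: embed the characters of a word, apply a window-3 convolution, and max-pool over time to obtain a fixed-size character representation, as in Figure 1. The paper reports 30 filters with window length 3; dropout, training, and all parameter values here are placeholders rather than the authors' Theano code.

```python
import numpy as np

rng = np.random.default_rng(4)
chars = list("abcdefghijklmnopqrstuvwxyz") + ["<pad>"]
char2id = {c: i for i, c in enumerate(chars)}
d_char, n_filters, window = 30, 30, 3

E = rng.normal(scale=0.1, size=(len(chars), d_char))          # character embeddings
F = rng.normal(scale=0.1, size=(n_filters, window * d_char))  # convolution filters
b = np.zeros(n_filters)

def char_representation(word):
    pad = (window - 1) // 2
    ids = [char2id["<pad>"]] * pad + [char2id[c] for c in word] + [char2id["<pad>"]] * pad
    X = E[ids]                                                # (len(word) + 2*pad, d_char)
    # convolution over character windows, then max pooling over time
    windows = np.stack([X[i:i + window].ravel() for i in range(len(word))])
    conv = np.tanh(windows @ F.T + b)                         # (len(word), n_filters)
    return conv.max(axis=0)                                   # (n_filters,)

print(char_representation("playing").shape)   # (30,)
```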
Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network. Finally, the output vectors of BLSTM are fed to the CRF layer to jointly decode the best label sequence. As shown in Figure 3, dropout layers are applied on both the input and output vectors of BLSTM. Experimental results show that using dropout significantly 1066 We Word Embedding are playing soccer Char Representation Forward LSTM Backward LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM PRP VBP NN VBG CRF Layer Figure 3: The main architecture of our neural network. The character representation for each word is computed by the CNN in Figure 1. Then the character representation vector is concatenated with the word embedding before feeding into the BLSTM network. Dashed arrows indicate dropout layers applied on both the input and output vectors of BLSTM. improve the performance of our model (see Section 4.5 for details). 3 Network Training In this section, we provide details about training the neural network. We implement the neural network using the Theano library (Bergstra et al., 2010). The computations for a single model are run on a GeForce GTX TITAN X GPU. Using the settings discussed in this section, the model training requires about 12 hours for POS tagging and 8 hours for NER. 3.1 Parameter Initialization Word Embeddings. We use Stanford’s publicly available GloVe 100-dimensional embeddings1 trained on 6 billion words from Wikipedia and web text (Pennington et al., 2014) 1http://nlp.stanford.edu/projects/ glove/ We also run experiments on two other sets of published embeddings, namely Senna 50dimensional embeddings2 trained on Wikipedia and Reuters RCV-1 corpus (Collobert et al., 2011), and Google’s Word2Vec 300-dimensional embeddings3 trained on 100 billion words from Google News (Mikolov et al., 2013). To test the effectiveness of pretrained word embeddings, we experimented with randomly initialized embeddings with 100 dimensions, where embeddings are uniformly sampled from range [− q 3 dim, + q 3 dim] where dim is the dimension of embeddings (He et al., 2015). The performance of different word embeddings is discussed in Section 4.4. Character Embeddings. Character embeddings are initialized with uniform samples from [− q 3 dim, + q 3 dim], where we set dim = 30. Weight Matrices and Bias Vectors. Matrix parameters are randomly initialized with uniform samples from [− q 6 r+c, + q 6 r+c], where r and c are the number of of rows and columns in the structure (Glorot and Bengio, 2010). Bias vectors are initialized to zero, except the bias bf for the forget gate in LSTM , which is initialized to 1.0 (Jozefowicz et al., 2015). 3.2 Optimization Algorithm Parameter optimization is performed with minibatch stochastic gradient descent (SGD) with batch size 10 and momentum 0.9. We choose an initial learning rate of η0 (η0 = 0.01 for POS tagging, and 0.015 for NER, see Section 3.3.), and the learning rate is updated on each epoch of training as ηt = η0/(1 + ρt), with decay rate ρ = 0.05 and t is the number of epoch completed. To reduce the effects of “gradient exploding”, we use a gradient clipping of 5.0 (Pascanu et al., 2012). We explored other more sophisticated optimization algorithms such as AdaDelta (Zeiler, 2012), Adam (Kingma and Ba, 2014) or RMSProp (Dauphin et al., 2015), but none of them meaningfully improve upon SGD with momentum and gradient clipping in our preliminary experiments. Early Stopping. 
We use early stopping (Giles, 2001; Graves et al., 2013) based on performance on validation sets. The “best” parameters appear at around 50 epochs, according to our experiments. 2http://ronan.collobert.com/senna/ 3https://code.google.com/archive/p/ word2vec/ 1067 Layer Hyper-parameter POS NER CNN window size 3 3 number of filters 30 30 LSTM state size 200 200 initial state 0.0 0.0 peepholes no no Dropout dropout rate 0.5 0.5 batch size 10 10 initial learning rate 0.01 0.015 decay rate 0.05 0.05 gradient clipping 5.0 5.0 Table 1: Hyper-parameters for all experiments. Fine Tuning. For each of the embeddings, we fine-tune initial embeddings, modifying them during gradient updates of the neural network model by back-propagating gradients. The effectiveness of this method has been previously explored in sequential and structured prediction problems (Collobert et al., 2011; Peng and Dredze, 2015). Dropout Training. To mitigate overfitting, we apply the dropout method (Srivastava et al., 2014) to regularize our model. As shown in Figure 1 and 3, we apply dropout on character embeddings before inputting to CNN, and on both the input and output vectors of BLSTM. We fix dropout rate at 0.5 for all dropout layers through all the experiments. We obtain significant improvements on model performance after using dropout (see Section 4.5). 3.3 Tuning Hyper-Parameters Table 1 summarizes the chosen hyper-parameters for all experiments. We tune the hyper-parameters on the development sets by random search. Due to time constrains it is infeasible to do a random search across the full hyper-parameter space. Thus, for the tasks of POS tagging and NER we try to share as many hyper-parameters as possible. Note that the final hyper-parameters for these two tasks are almost the same, except the initial learning rate. We set the state size of LSTM to 200. Tuning this parameter did not significantly impact the performance of our model. For CNN, we use 30 filters with window length 3. 4 Experiments 4.1 Data Sets As mentioned before, we evaluate our neural network model on two sequence labeling tasks: POS tagging and NER. Dataset WSJ CoNLL2003 Train SENT 38,219 14,987 TOKEN 912,344 204,567 Dev SENT 5,527 3,466 TOKEN 131,768 51,578 Test SENT 5,462 3,684 TOKEN 129,654 46,666 Table 2: Corpora statistics. SENT and TOKEN refer to the number of sentences and tokens in each data set. POS Tagging. For English POS tagging, we use the Wall Street Journal (WSJ) portion of Penn Treebank (PTB) (Marcus et al., 1993), which contains 45 different POS tags. In order to compare with previous work, we adopt the standard splits — section 0–18 as training data, section 19– 21 as development data and section 22–24 as test data (Manning, 2011; Søgaard, 2011). NER. For NER, We perform experiments on the English data from CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003). This data set contains four different types of named entities: PERSON, LOCATION, ORGANIZATION, and MISC. We use the BIOES tagging scheme instead of standard BIO2, as previous studies have reported meaningful improvement with this scheme (Ratinov and Roth, 2009; Dai et al., 2015; Lample et al., 2016). The corpora statistics are shown in Table 2. We did not perform any pre-processing for data sets, leaving our system truly end-to-end. 4.2 Main Results We first run experiments to dissect the effectiveness of each component (layer) of our neural network architecture by ablation studies. 
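A minimal sketch of the parameter initialization and SGD learning-rate schedule described in Section 3: uniform ranges of ±sqrt(3/dim) for embeddings and ±sqrt(6/(r+c)) for weight matrices, a forget-gate bias of 1.0, and the per-epoch decay ηt = η0/(1 + ρt). The values mirror the text; the function names and the NumPy setting are my own, since the original implementation is in Theano.

```python
import numpy as np

rng = np.random.default_rng(5)

def init_embedding(vocab, dim):
    bound = np.sqrt(3.0 / dim)
    return rng.uniform(-bound, bound, size=(vocab, dim))

def init_matrix(rows, cols):
    bound = np.sqrt(6.0 / (rows + cols))
    return rng.uniform(-bound, bound, size=(rows, cols))

def init_lstm_biases(hidden):
    b = np.zeros(4 * hidden)        # gate order assumed: input, forget, cell, output
    b[hidden:2 * hidden] = 1.0      # forget-gate bias initialized to 1.0
    return b

def learning_rate(epoch, eta0=0.015, rho=0.05):   # eta0 = 0.015 for NER, 0.01 for POS
    return eta0 / (1.0 + rho * epoch)

print(init_embedding(100, 30).shape, init_matrix(200, 400).shape)
print([round(learning_rate(t), 4) for t in range(5)])
```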
We compare the performance with three baseline systems — BRNN, the bi-direction RNN; BLSTM, the bidirection LSTM, and BLSTM-CNNs, the combination of BLSTM with CNN to model characterlevel information. All these models are run using Stanford’s GloVe 100 dimensional word embeddings and the same hyper-parameters as shown in Table 1. According to the results shown in Table 3, BLSTM obtains better performance than BRNN on all evaluation metrics of both the two tasks. BLSTM-CNN models significantly outperform the BLSTM model, showing that characterlevel representations are important for linguistic sequence labeling tasks. This is consistent with 1068 POS NER Dev Test Dev Test Model Acc. Acc. Prec. Recall F1 Prec. Recall F1 BRNN 96.56 96.76 92.04 89.13 90.56 87.05 83.88 85.44 BLSTM 96.88 96.93 92.31 90.85 91.57 87.77 86.23 87.00 BLSTM-CNN 97.34 97.33 92.52 93.64 93.07 88.53 90.21 89.36 BRNN-CNN-CRF 97.46 97.55 94.85 94.63 94.74 91.35 91.06 91.21 Table 3: Performance of our model on both the development and test sets of the two tasks, together with three baseline systems. Model Acc. Gim´enez and M`arquez (2004) 97.16 Toutanova et al. (2003) 97.27 Manning (2011) 97.28 Collobert et al. (2011)‡ 97.29 Santos and Zadrozny (2014)‡ 97.32 Shen et al. (2007) 97.33 Sun (2014) 97.36 Søgaard (2011) 97.50 This paper 97.55 Table 4: POS tagging accuracy of our model on test data from WSJ proportion of PTB, together with top-performance systems. The neural network based models are marked with ‡. results reported by previous work (Santos and Zadrozny, 2014; Chiu and Nichols, 2015). Finally, by adding CRF layer for joint decoding we achieve significant improvements over BLSTMCNN models for both POS tagging and NER on all metrics. This demonstrates that jointly decoding label sequences can significantly benefit the final performance of neural network models. 4.3 Comparison with Previous Work 4.3.1 POS Tagging Table 4 illustrates the results of our model for POS tagging, together with seven previous topperformance systems for comparison. Our model significantly outperform Senna (Collobert et al., 2011), which is a feed-forward neural network model using capitalization and discrete suffix features, and data pre-processing. Moreover, our model achieves 0.23% improvements on accuracy over the “CharWNN” (Santos and Zadrozny, 2014), which is a neural network model based on Senna and also uses CNNs to model characterlevel representations. This demonstrates the effectiveness of BLSTM for modeling sequential data Model F1 Chieu and Ng (2002) 88.31 Florian et al. (2003) 88.76 Ando and Zhang (2005) 89.31 Collobert et al. (2011)‡ 89.59 Huang et al. (2015)‡ 90.10 Chiu and Nichols (2015)‡ 90.77 Ratinov and Roth (2009) 90.80 Lin and Wu (2009) 90.90 Passos et al. (2014) 90.90 Lample et al. (2016)‡ 90.94 Luo et al. (2015) 91.20 This paper 91.21 Table 5: NER F1 score of our model on test data set from CoNLL-2003. For the purpose of comparison, we also list F1 scores of previous topperformance systems. ‡ marks the neural models. and the importance of joint decoding with structured prediction model. Comparing with traditional statistical models, our system achieves state-of-the-art accuracy, obtaining 0.05% improvement over the previously best reported results by Søgaard (2011). It should be noted that Huang et al. (2015) also evaluated their BLSTM-CRF model for POS tagging on WSJ corpus. But they used a different splitting of the training/dev/test data sets. Thus, their results are not directly comparable with ours. 
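The gains from adding the CRF layer reported above come from decoding the whole label sequence jointly rather than choosing each tag independently. The sketch below shows the standard first-order Viterbi decoding such a layer typically performs over per-token emission scores and a label-transition matrix; the NumPy formulation and variable names are illustrative assumptions, not the paper's implementation.

import numpy as np

def viterbi_decode(emissions, transitions):
    # emissions: (seq_len, num_labels) scores for each token (e.g. from the BLSTM).
    # transitions: (num_labels, num_labels) score of moving from label i to label j.
    # Returns the jointly highest-scoring label sequence.
    seq_len, num_labels = emissions.shape
    score = emissions[0].copy()
    backpointers = []
    for t in range(1, seq_len):
        # candidate[i, j] = best score ending in label i at t-1, plus i -> j, plus emission at t
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(candidate.argmax(axis=0))
        score = candidate.max(axis=0)
    best = [int(score.argmax())]
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    return list(reversed(best))

Replacing the max with a log-sum-exp over previous labels gives the forward recursion used to compute the CRF training objective.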
4.3.2 NER Table 5 shows the F1 scores of previous models for NER on the test data set from CoNLL-2003 shared task. For the purpose of comparison, we list their results together with ours. Similar to the observations of POS tagging, our model achieves significant improvements over Senna and the other three neural models, namely the LSTM-CRF proposed by Huang et al. (2015), LSTM-CNNs pro1069 Embedding Dimension POS NER Random 100 97.13 80.76 Senna 50 97.44 90.28 Word2Vec 300 97.40 84.91 GloVe 100 97.55 91.21 Table 6: Results with different choices of word embeddings on the two tasks (accuracy for POS tagging and F1 for NER). posed by Chiu and Nichols (2015), and the LSTMCRF by Lample et al. (2016). Huang et al. (2015) utilized discrete spelling, POS and context features, Chiu and Nichols (2015) used charactertype, capitalization, and lexicon features, and all the three model used some task-specific data preprocessing, while our model does not require any carefully designed features or data pre-processing. We have to point out that the result (90.77%) reported by Chiu and Nichols (2015) is incomparable with ours, because their final model was trained on the combination of the training and development data sets4. To our knowledge, the previous best F1 score (91.20)5 reported on CoNLL 2003 data set is by the joint NER and entity linking model (Luo et al., 2015). This model used many hand-crafted features including stemming and spelling features, POS and chunks tags, WordNet clusters, Brown Clusters, as well as external knowledge bases such as Freebase and Wikipedia. Our end-to-end model slightly improves this model by 0.01%, yielding a state-of-the-art performance. 4.4 Word Embeddings As mentioned in Section 3.1, in order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly published word embeddings, as well as a random sampling method, to initialize our model. Table 6 gives the performance of three different word embeddings, as well as the randomly sampled one. According to the results in Table 6, models using pretrained word embeddings obtain a significant improvement as opposed to the ones using random embeddings. Comparing the two tasks, NER relies 4We run experiments using the same setting and get 91.37% F1 score. 5Numbers are taken from the Table 3 of the original paper (Luo et al., 2015). While there is clearly inconsistency among the precision (91.5%), recall (91.4%) and F1 scores (91.2%), it is unclear in which way they are incorrect. POS NER Train Dev Test Train Dev Test No 98.46 97.06 97.11 99.97 93.51 89.25 Yes 97.86 97.46 97.55 99.63 94.74 91.21 Table 7: Results with and without dropout on two tasks (accuracy for POS tagging and F1 for NER). POS NER Dev Test Dev Test IV 127,247 125,826 4,616 3,773 OOTV 2,960 2,412 1,087 1,597 OOEV 659 588 44 8 OOBV 902 828 195 270 Table 8: Statistics of the partition on each corpus. It lists the number of tokens of each subset for POS tagging and the number of entities for NER. more heavily on pretrained embeddings than POS tagging. This is consistent with results reported by previous work (Collobert et al., 2011; Huang et al., 2015; Chiu and Nichols, 2015). For different pretrained embeddings, Stanford’s GloVe 100 dimensional embeddings achieve best results on both tasks, about 0.1% better on POS accuracy and 0.9% better on NER F1 score than the Senna 50 dimensional one. 
This is different from the results reported by Chiu and Nichols (2015), where Senna achieved slightly better performance on NER than the other embeddings. Google's Word2Vec 300-dimensional embeddings obtain performance similar to Senna's on POS tagging, still slightly behind GloVe. But for NER, the performance of Word2Vec is far behind GloVe and Senna. One possible reason that Word2Vec is not as good as the other two embeddings on NER is vocabulary mismatch — the Word2Vec embeddings were trained in a case-sensitive manner and exclude many common symbols such as punctuation marks and digits. Since we do not use any data pre-processing to deal with such common symbols or rare words, this mismatch may be an issue when using Word2Vec. 4.5 Effect of Dropout Table 7 compares the results with and without dropout layers for each data set. All other hyper-parameters remain the same as in Table 1. We observe a substantial improvement on both tasks, which demonstrates the effectiveness of dropout in reducing overfitting.
POS Dev (IV / OOTV / OOEV / OOBV): LSTM-CNN 97.57 / 93.75 / 90.29 / 80.27; LSTM-CNN-CRF 97.68 / 93.65 / 91.05 / 82.71
POS Test (IV / OOTV / OOEV / OOBV): LSTM-CNN 97.55 / 93.45 / 90.14 / 80.07; LSTM-CNN-CRF 97.77 / 93.16 / 90.65 / 82.49
NER Dev (IV / OOTV / OOEV / OOBV): LSTM-CNN 94.83 / 87.28 / 96.55 / 82.90; LSTM-CNN-CRF 96.49 / 88.63 / 97.67 / 86.91
NER Test (IV / OOTV / OOEV / OOBV): LSTM-CNN 90.07 / 89.45 / 100.00 / 78.44; LSTM-CNN-CRF 92.14 / 90.73 / 100.00 / 80.60
Table 9: Comparison of performance on different subsets of words (accuracy for POS and F1 for NER).
4.6 OOV Error Analysis To better understand the behavior of our model, we perform error analysis on out-of-vocabulary (OOV) words. Specifically, we partition each data set into four subsets — in-vocabulary words (IV), out-of-training-vocabulary words (OOTV), out-of-embedding-vocabulary words (OOEV) and out-of-both-vocabulary words (OOBV). A word is considered IV if it appears in both the training and embedding vocabulary, and OOBV if it appears in neither. OOTV words are those that do not appear in the training set but do appear in the embedding vocabulary, while OOEV words are those that appear in the training set but not in the embedding vocabulary. For NER, an entity is considered OOBV if it contains at least one word not in the training set and at least one word not in the embedding vocabulary; the other three subsets are defined analogously. Table 8 gives the statistics of this partition for each corpus. The embedding we used is Stanford's GloVe with dimension 100, the same as in Section 4.2. Table 9 shows the performance of our model on the different subsets of words, together with the baseline LSTM-CNN model for comparison. The largest improvements appear on the OOBV subsets of both corpora. This demonstrates that by adding the CRF for joint decoding, our model is more powerful on words that are outside both the training and embedding vocabularies. 5 Related Work In recent years, several different neural network architectures have been proposed and successfully applied to linguistic sequence labeling tasks such as POS tagging, chunking and NER. Among these neural architectures, the three approaches most similar to our model are the BLSTM-CRF model proposed by Huang et al. (2015), the LSTM-CNNs model by Chiu and Nichols (2015) and the BLSTM-CRF by Lample et al. (2016). Huang et al. (2015) used a BLSTM for word-level representations and a CRF for joint label decoding, which is similar to our model. But there are two main differences between their model and ours. First, they did not employ CNNs to model character-level information.
Second, they combined their neural network model with handcrafted features to improve their performance, making their model not an end-to-end system. Chiu and Nichols (2015) proposed a hybrid of BLSTM and CNNs to model both character- and word-level representations, which is similar to the first two layers in our model. They evaluated their model on NER and achieved competitive performance. Our model mainly differ from this model by using CRF for joint decoding. Moreover, their model is not truly end-to-end, either, as it utilizes external knowledge such as character-type, capitalization and lexicon features, and some data preprocessing specifically for NER (e.g. replacing all sequences of digits 0-9 with a single “0”). Recently, Lample et al. (2016) proposed a BLSTMCRF model for NER, which utilized BLSTM to model both the character- and word-level information, and use data pre-processing the same as Chiu and Nichols (2015). Instead, we use CNN to model character-level information, achieving better NER performance without using any data preprocessing. There are several other neural networks previously proposed for sequence labeling. Labeau et al. (2015) proposed a RNN-CNNs model for German POS tagging. This model is similar to the LSTM-CNNs model in Chiu and Nichols (2015), with the difference of using vanila RNN instead of LSTM. Another neural architecture employing 1071 CNN to model character-level information is the “CharWNN” architecture (Santos and Zadrozny, 2014) which is inspired by the feed-forward network (Collobert et al., 2011). CharWNN obtained near state-of-the-art accuracy on English POS tagging (see Section 4.3 for details). A similar model has also been applied to Spanish and Portuguese NER (dos Santos et al., 2015) Ling et al. (2015) and Yang et al. (2016) also used BSLTM to compose character embeddings to word’s representation, which is similar to Lample et al. (2016). Peng and Dredze (2016) Improved NER for Chinese Social Media with Word Segmentation. 6 Conclusion In this paper, we proposed a neural network architecture for sequence labeling. It is a truly end-toend model relying on no task-specific resources, feature engineering or data pre-processing. We achieved state-of-the-art performance on two linguistic sequence labeling tasks, comparing with previously state-of-the-art systems. There are several potential directions for future work. First, our model can be further improved by exploring multi-task learning approaches to combine more useful and correlated information. For example, we can jointly train a neural network model with both the POS and NER tags to improve the intermediate representations learned in our network. Another interesting direction is to apply our model to data from other domains such as social media (Twitter and Weibo). Since our model does not require any domain- or taskspecific knowledge, it might be effortless to apply it to these domains. Acknowledgements This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. 
Neural Networks, IEEE Transactions on, 5(2):157–166. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a cpu and gpu math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy), volume 4, page 3. Austin, TX. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP-2014, pages 740– 750, Doha, Qatar, October. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of CoNLL-2003, pages 1–7. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. Syntax, Semantics and Structure in Statistical Translation, page 103. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization. Journal of cheminformatics, 7(S1):1–10. Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. 2015. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390. Cıcero dos Santos, Victor Guimaraes, RJ Niter´oi, and Rio de Janeiro. 2015. Boosting named entity recognition with neural character embeddings. In Proceedings of NEWS 2015 The Fifth Named Entities Workshop, page 25. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL-2015 (Volume 1: Long Papers), pages 334–343, Beijing, China, July. Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of HLT-NAACL-2003, pages 168–171. 1072 Felix A Gers, J¨urgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451–2471. Felix A Gers, Nicol N Schraudolph, and J¨urgen Schmidhuber. 2003. Learning precise timing with lstm recurrent networks. The Journal of Machine Learning Research, 3:115–143. Rich Caruana Steve Lawrence Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, volume 13, page 402. MIT Press. Jes´us Gim´enez and Llu´ıs M`arquez. 2004. Svmtool: A general pos tagger generator based on support vector machines. In In Proceedings of LREC-2004. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249–256. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Neural Networks, 1996., IEEE International Conference on, volume 1, pages 347–352. IEEE. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. 
Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP2013, pages 6645–6649. IEEE. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard H. Hovy, and Eric P. Xing. 2016. Harnessing deep neural networks with logic rules. In Proceedings of ACL-2016, Berlin, Germany, August. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of ACL2010, pages 1–11, Uppsala, Sweden, July. Matthieu Labeau, Kevin L¨oser, Alexandre Allauzen, and Rue John von Neumann. 2015. Non-lexical neural architecture for fine-grained pos tagging. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 232–237. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML-2001, volume 951, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-2016, San Diego, California, USA, June. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551. Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of ACL2009, pages 1030–1038. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of EMNLP-2015, pages 1520–1530, Lisbon, Portugal, September. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of EMNLP-2015, pages 879–888, Lisbon, Portugal, September. Xuezhe Ma and Eduard Hovy. 2015. Efficient innerto-outer greedy algorithm for higher-order labeled dependency parsing. In Proceedings of the EMNLP2015, pages 1322–1328, Lisbon, Portugal, September. Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of ACL-2014, pages 1337–1348, Baltimore, Maryland, June. Xuezhe Ma and Hai Zhao. 2012a. Fourth-order dependency parsing. In Proceedings of COLING 2012: Posters, pages 785–796, Mumbai, India, December. Xuezhe Ma and Hai Zhao. 2012b. Probabilistic models for high-order projective dependency parsing. Technical Report, arXiv:1502.04174. 1073 Xuezhe Ma, Zhengzhong Liu, and Eduard Hovy. 2016. Unsupervised ranking model for entity coreference resolution. 
In Proceedings of NAACL-2016, San Diego, California, USA, June. Christopher D Manning. 2011. Part-of-speech tagging from 97% to 100%: is it time for some linguistics? In Computational Linguistics and Intelligent Text Processing, pages 171–189. Springer. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL-2005, pages 91–98, Ann Arbor, Michigan, USA, June 2530. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of ACL-2010, pages 1396–1411, Uppsala, Sweden, July. Association for Computational Linguistics. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING-2004, pages 64–70, Geneva, Switzerland, August 23-27. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of CoNLL2014, pages 78–86, Ann Arbor, Michigan, June. Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of EMNLP2015, pages 548–554, Lisbon, Portugal, September. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In Proceedings of ACL-2016, Berlin, Germany, August. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP-2014, pages 1532–1543, Doha, Qatar, October. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of CoNLL-2009, pages 147–155. Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of ICML-2014, pages 1818–1826. Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In Proceedings of ACL-2007, volume 7, pages 760–767. Anders Søgaard. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 48–52, Portland, Oregon, USA, June. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Xu Sun. 2014. Structure regularization for structured prediction. In Advances in Neural Information Processing Systems, pages 2402–2410. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003 - Volume 4, pages 142– 147, Stroudsburg, PA, USA. Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing text chunks. In Proceedings of EACL’99, pages 173–179. Bergen, Norway. 
Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich partof-speech tagging with a cyclic dependency network. In Proceedings of NAACL-HLT-2003, Volume 1, pages 173–180. Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. 1074
2016
101
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1075–1084, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Off-topic Response Detection for Spontaneous Spoken English Assessment Andrey Malinin, Rogier C. Van Dalen, Yu Wang, Kate M. Knill, Mark J. F. Gales University of Cambridge, Department of Engineering Trumpington St, Cambridge CB2 1PZ, UK {am969, yw396, kate.knill, mjfg}@eng.cam.ac.uk Abstract Automatic spoken language assessment systems are becoming increasingly important to meet the demand for English second language learning. This is a challenging task due to the high error rates of, even state-of-the-art, non-native speech recognition. Consequently current systems primarily assess fluency and pronunciation. However, content assessment is essential for full automation. As a first stage it is important to judge whether the speaker responds on topic to test questions designed to elicit spontaneous speech. Standard approaches to off-topic response detection assess similarity between the response and question based on bag-of-words representations. An alternative framework based on Recurrent Neural Network Language Models (RNNLM) is proposed in this paper. The RNNLM is adapted to the topic of each test question. It learns to associate example responses to questions with points in a topic space constructed using these example responses. Classification is done by ranking the topic-conditional posterior probabilities of a response. The RNNLMs associate a broad range of responses with each topic, incorporate sequence information and scale better with additional training data, unlike standard methods. On experiments conducted on data from the Business Language Testing Service (BULATS) this approach outperforms standard approaches. 1 Introduction As English has become the global lingua franca, there is growing demand worldwide for assessment of English as a second language (Seidlhofer, 2005). To assess spoken communication, spontaneous speech is typically elicited through a series of questions such as ’describe the photo’ or ’plan a meeting’. Grades are awarded based on a candidate’s responses. Automatic assessment systems are becoming attractive as they allow second language assessment programmes to economically scale their operations while decreasing throughput time and provide testing on demand. Features for automatic graders are derived from the audio and from hypotheses produced by automatic speech recognition (ASR) systems. The latter is highly errorful due to the large variability in the input speech; disfluencies common to spontaneous speech, nonnative accents and pronunciations. Current systems, such as ETS’ SpeechRater (Zechner et al., 2009) and Pearson’s AZELLA (Metallinou and Cheng, 2014), primarily assess pronunciation and fluency. Although these are clearly indicative of spoken language ability, full assessment of spoken communication requires judgement of highlevel content and communication skills, such as response construction and relevance. The first stage of this is to assess whether the responses are offtopic, that is, has the candidate misunderstood the question and/or memorised a response. While there has been little work done on detecting off-topic responses for spoken language assessment, detection of off-topic responses and content assessment has been studied for essay assessment. 
One approach for essay content assessment uses features based on semantic similarity metrics between vector space representations of responses. Common vector representations include lexical Vector Space Models and Latent Semantic Analysis (LSA) (Yannakoudakis, 2013). This approach was first applied to spoken assess1075 ment in (Xie et al., 2012) and then in (Evanini et al., 2013). Following this, (Yoon and Xie, 2014) investigated the detection of responses for which an automatic assessment system will have difficulty in assigning a valid score, of which offtopic responses are a specific type. A decision tree classifier is used with features based on cosine similarity between a test response and tf-idf vectors of both aggregate example responses and questions, as well as pronunciation and fluency. In (Evanini and Wang, 2014) text reuse and plagiarism in spoken responses are detected using a decision tree classifier based on vector similarity and lexical matching features which compare a response to a set of example ’source texts’ . This task is similar to off-topic response detection in that it is based on comparing a test response to example responses. Thus, a standard approach to off-topic response detection would be based on measuring the similarity between vector representations of a spoken response and the test question. A major deficiency of this approach is that it is based on bag-of-words vector representations, which loses information about the sequential nature of speech, which is important to evaluating response construction and relevance. Additionally, adapting the approach to model a range of responses for each topic causes classification time to scale poorly with training data size and the number of questions. To address these issues a general off-topic content detection framework based on topic adapted Recurrent Neural Network language models (RNNLM) has been developed and applied to off-topic response detection for spoken language assessment. This framework uses example responses to test questions in training of the language model and construction of the topic-space. The RNNLM learns to associate the example responses with points in the topic-space. Classification is done by ranking the topic-conditional posterior probabilities of a response. The advantage of this approach is that sequence information can be taken into account and broad ranges of responses can be associated with each topic without affecting classifcation speed. Two topic vector representations are investigated: Latent Dirichlet Allocation (LDA) (Blei et al., 2003; Griffiths and Steyvers, 2004) and Latent Semantic Analysis (LSA) (Landauer et al., 1998). They are compared to standard approaches on data from the Cambridge Business English (BULATS) exam. The rest of this paper is structured as follows: Section 2 discusses the RNNLM adaptation and topic spaces; Section 3 discusses approaches to topic detection; Section 4 presents data sets and experimental infrastructure; Section 5 analyzes experimental results; Section 6 concludes the paper. 2 Topic Adapted RNNLMs 2.1 RNNLM Architecture A statistical language model is used to model the semantic and syntactic information in text in the form of a probability distribution over word sequences. It assigns a probability to a word sequence w = {w0, w1, · · · , wL} as follows: P(wi|wi−1, · · · , w0) = P(wi|hi−1 0 ) P(w) = L Y i=1 P(wi|hi−1 0 ) (1) (2) where w0 is the start of sentence symbol <s>. 
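Equation (2) above factorises the probability of a word sequence into a product of conditional word probabilities; in practice this is computed as a sum of log-probabilities. A minimal sketch follows, assuming a conditional-probability function cond_prob(word, history) supplied by some language model (this interface is our illustration, not the paper's code).

import math

def sentence_log_prob(words, cond_prob):
    # Chain-rule scoring of a word sequence, with '<s>' as the start symbol w0.
    history = ['<s>']
    total = 0.0
    for word in words:
        total += math.log(cond_prob(word, tuple(history)))
        history.append(word)
    return total

# Toy stand-in for a language model: every word has probability 0.25.
# sentence_log_prob(['plan', 'a', 'meeting'], lambda w, h: 0.25) -> 3 * math.log(0.25)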
In this work a language model is trained to model example responses to questions on a spoken language assessment test. P(w_i | h_0^{i-1}) can be estimated by a number of approaches, most notably N-grams and Recurrent Neural Networks (Mikolov et al., 2010). Recurrent Neural Network language models (RNNLMs) (Figure 1) (Mikolov, 2012) are variable-context-length language models, capable of representing the entire context efficiently, unlike N-grams. RNNLMs represent the full untruncated history h_0^{i-1} = {w_{i-1}, ..., w_0} for word w_i as the hidden layer s_{i-1}, a form of short-term memory, whose representation is learned from the data. Words and phrases are represented in a continuous space, which gives RNNLMs greater generalization abilities than other language models, such as N-grams. RNNLMs can be adapted by adding a feature vector f which represents information absent from the RNN (Mikolov and Zweig, 2012). In this work, the vector representation of a spoken language test question topic, f_q, is used for the context vector f. Architecturally, a context-adapted RNNLM is described by equations (3)-(5), where e(x) and g(x) are element-wise sigmoid and softmax activation functions.
Figure 1: Context adapted RNN language model.
P(w_i | h_0^{i-1}, f) = P_RNN(w_i | w_{i-1}, s_{i-2}, f)   (3)
P_RNN(w_i | w_{i-1}, s_{i-2}, f) = g(V s_{i-1} + H f)   (4)
s_{i-1} = e(U w_{i-1} + W s_{i-2} + G f)   (5)
Through the process of adaptation the RNNLM learns to associate particular types of responses with particular topics, thereby becoming more discriminative. Thus, a sentence's topic-conditional probability P_RNN(w | f_q) will be higher if it corresponds to the topic q than if it does not. 2.2 Example Response Based Topic Space In order for the topic vectors f_q to be informative they must span the space of all question topics in the test. Thus a topic space needs to be defined. Example responses, which are necessary to train the RNNLM, are used to define the topic space because typical responses to a question will be definitive of the question's topic. Multiple example responses to a particular question are merged into one aggregate response to capture a broad range of response variations and to increase the robustness of the vector representation estimation. By default a topic t is defined for each question q. However, multi-part questions are common, where candidates are given a scenario, such as providing tourist information, in which individual questions ask about food, hotels or sights. Since the underlying topic is related, this can confuse a classifier. The responses for all these related questions could be merged to form a single aggregate vector, but the statistics of the responses to each question can be sufficiently different that less distinct topics are formed. Instead, the aggregate example responses for each question are assigned the same topic label. Thus, a mapping between questions and topics and its inverse is introduced:
M : q → t   (6)
M^{-1}_t = {q ∈ Q | M(q) = t}   (7)
A vector representation of a question topic is computed using the aggregate example responses. As mentioned in Section 1, two common representations are LDA and LSA; both are investigated in this work. LDA is a generative model which allows documents to be modelled as distributions over latent topics z ∈ Z. Each latent topic z is described by a multinomial distribution over words P(w_i | z), and each word in a document is attributed to a particular latent topic (Blei et al., 2003).
Thus, the adaptation vector fw represents a vector of posterior probabilities over latent topics for word sequence w: fw = [P(z = 1|w), · · · , P(z = K|w)]T P(z = k|w) = PN i=1 δ(zwi = k) N (8) (9) LDA was found to perform better for RNNLM adaptation than other representations in (Mikolov and Zweig, 2012; Chen et al., 2015). LSA (Landauer et al., 1998) is a popular representation for information retrieval tasks. A worddocument matrix F is constructed using example responses and each word is weighted by its term frequency-inverse document frequency (TF-IDF). Then a low-rank approximation Fk is computed using Singular Value Decomposition (SVD): Fk = UkΣkVT k Fk = [f1, · · · , fQ]T fw = Σ−1 k UT k ftfidf (10) (11) (12) Only the k largest singular values of the singular value matrix Σ are kept. Fk is a representation of the data which retains only the k most significant factors of variation. Each row is a topic vector fq. New vectors fw for test responses can be projected into the LSA space via equation 12, where ftfidf is the TF-IDF weighted bag-of-word representation of a test response. 1077 3 Topic Detection and Features This section discusses the standard, vector similarity feature-based, and the proposed topic adapted RNNLM approaches for topic detection. 3.1 Vector Similarity Features The essence of the standard approach to topic detection is to assess the semantic distance Dsem between the test response w and the aggregate example response wq by approximating it using a vector distance metric Dvec between vector representations of the response fw and the topic fq. Classification is done by selecting the topic closest to the test response: ˆtw = M arg min q {Dsem(w, wq)}  Dsem(w, wq) ≈Dvec(fw, fq) (13) (14) The selection of an appropriate distance metric Dvec(fw, fq) can have a large effect on the classification outcome. A common metric used in topic classification and information retrieval is cosine similarity, which measures the cosine of the angle between two vectors. A distance metric based on this, cosine distance, can be defined as: Dcos(fw, fq) = 1 −fwTfq |fw||fq| (15) While topics are robustly defined, this approach fails to capture the range of responses which can be given to a question. A different approach would be to maintain a separate point in this topic space for every example response. This retains the robust topic definition while allowing each topic to be represented by a cloud of points in topic space, thereby capturing a range of responses which can be given. A K-nearest neighbour (KNN) classifier can be used to detect the response topic by computing distances of the test response to each of the training points in topic space. However, classification may become impractical for large data sets, as the number of response points scales with the size of the training data. Low-cost distance measures, such as cosine distance, allow this approach to be used on large data sets before it becomes computationally infeasible. This approach is used as the baseline for comparison with the proposed RNNLM based method. For multi-part questions, topic vectors relating to the same overall topic are simply given the same topic label. The classification rate can be improved by taking the top N ˆtN = {ˆt1, · · · , ˆtN} results into account. The KNN classifier can be modified to yield the N-best classification by removing all training points from the 1-best class from the KNN classifier and re-running the classification to get the 2-best results, and so on. 
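A minimal sketch of the vector-similarity baseline in equations (13)-(15): the test response and the example responses are points in the topic space, and the K-nearest-neighbour classifier described above votes among the closest example responses under cosine distance. The unweighted majority vote and the names below are illustrative simplifications, not the exact configuration used in the experiments.

import numpy as np
from collections import Counter

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn_topic(response_vec, example_vecs, example_topics, k=6):
    # example_vecs: topic-space vectors of individual example responses;
    # example_topics: their topic labels.  Returns the majority topic among
    # the k nearest example responses (1-best classification).
    distances = [cosine_distance(response_vec, v) for v in example_vecs]
    nearest = np.argsort(distances)[:k]
    votes = Counter(example_topics[i] for i in nearest)
    return votes.most_common(1)[0][0]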
One of the main deficiencies of methods based on computing distances between vector representations is that commonly used representations, such as LSA and LDA, ignore word-order in documents, thereby throwing away information which is potentially useful for topic classification. Additionally, if any of the test or example response utterances are short, then their topic vector representations may not be robustly estimated. 3.2 Topic Adapted RNNLM Framework The RNNLM based approach to topic detection is based on different principles. By combining equations 2 and 3 the log-probability L(q) of a response sentence given a particular topic vector PRNN(w|fq) is computed. For each response w in the test set L(q) is computed (equation 16) for all topic vectors fq. L(q) is calculated using equation 17 for multi-part questions with responses wp where p ∈t. Classification is done by ranking log-probability L(q) for an utterance w and L(q) for all q ∈M−1 t are averaged (equation 18). L(q) =      log[PRNN(w|fq)] X p 1 Np log[PRNN(wp|fq)] (16) (17) ˆtw = arg max t { 1 |M−1 t | X q∈M−1 t L(q)} (18) It is trivial to extend this approach to yield the Nbest solutions by simply taking the top N outputs of equation 18. The RNNLM approach has several benefits over standard approaches. Firstly, this approach explicitly takes account of word-order in determining the topical similarity of the response. Secondly, there is no need to explicitly select a distance metric. Thirdly, the problems of robustly estimating a vector representation fw of the test response are sidestepped. Furthermore, the RNNLM accounts for a broad range of responses because it is trained on individual response utterances which it associates with a question topic vector. This makes 1078 it more scalable than the KNN approach because the number of comparisons which need to be made scales only with the number of questions, not the size of the training data. Thus, arbitrarily large data sets can be used to train the model without affecting classification time. The RNNLM could be used in a KNN-style approach, where it associates each example response with its individual topic vector, using L(q) as a distance metric. However, this is computationally infeasible since computing L(q) is significantly more expensive than cosine distance and the previously mentioned scalability would be lost. 4 Data and Experimental Set-up Data from the Business Language Testing Service (BULATS) English tests is used for training and testing. At test time, each response is recognised using an ASR system and the 1-best hypothesis is passed to the topic classifier. The topic detection system decides whether the candidate has spoken off topic by comparing the classifier output to the topic of the question being answered. 4.1 BULATS Test Format and Data The BULATS Online Speaking Test has five sections (Chambers and Ingham, 2011): A Candidates respond to eight simple questions about themselves and their work (e.g. what is your name, where do you come from?). B Candidates read aloud six short texts appropriate to a work environment. C Candidates talk about a work-related topic (e.g. the perfect office) with the help of prompts which appear on the screen. D Candidates must describe a graph or chart such as a pie or a bar chart related to a business situation (e.g. company exports). E Candidates are asked to respond to five openended questions related to a single context prompt. For example a set of five questions about organizing a stall at a trade fair. 
Candidates are given a test consisting of 21 questions, however, only the last three sections, consisting of 7 questions, are spontaneously constructed responses to open ended question, and therefore of relevance to this work. Each unique set of 7 questions is a question script. Training, development and evaluation data sets composed of predominantly Gujarati L1 candidates are used in these experiments. The data sets are designed to have an (approximately) even distribution over grades as well as over the different question scripts. During operation the system will detect offtopic responses based on ASR transcriptions, so for the system to be matched it needs to be trained on ASR transcriptions as well. Thus, two training sets are made by using the ASR architecture described in section 4.2 to transcribe candidate responses. Each training set covers the same set of 282 unique topics. The first training set consists of data from 490 candidates, containing 9.9K responses, with an average of 35.1 responses per topic. The second, much larger, training set consists of data from 10004 candidates, containing 202K responses, with an average of 715.5 responses per topic. Characteristic Section A B C D E # Unique Topics 18 144 17 18 85 # Questions/Section 6 8 1 1 5 Av. # Words/Resp. 10 10 61 77 20 Table 1: Data Characteristics. As Table 1 shows, the average response length varies across sections due to the nature of the sections. Shorter responses to questions are observed for sections A, B and E, with longer responses to C and D. Estimating topic representations for sections A, B and E questions based on individual responses would be problematic due to the short response lengths. However, by aggregating example responses across candidates, as described in section 2.2, the average length of responses in all sections is significantly longer, allowing the exampleresponse topic space to be robustly defined. Section E topics correspond to topics of subquestions relating to an overall question, thus there are only 15 unique questions in section E. However, the sub-questions are sufficiently distinct to merit their own topic vectors. At classification time confusions between sub-questions of an overall section E question are not considered mistakes. Held-out test sets are used for development, DEV, and evaluation, EVAL, composed of 84 and 223 candidates, respectively. ASR transcriptions are used for these test sets, as per the operating 1079 scenario. A version of the DEV set with professionally produced transcriptions, DEV REF, is also used in training and development. The publicly available Gibbs-LDA toolkit (Phan and Nguyen, 2007) is used to estimate LDA posterior topic vectors and the scikit-learn 17.0 toolkit (Pedregosa et al., 2011) to estimate LSA topic representations. The topic adapted RNNLM uses a 100-dimensional hidden layer. DEV REF is used as a validation set for early stopping to prevent over-fitting. The CUED RNNLM toolkit v0.1 (Chen et al., 2016) is used for RNNLM training, details of which can be found in (Chen et al., 2014; Mikolov et al., 2010) 4.2 ASR System A hybrid deep neural network DNN-HMM system is used for ASR (Wang et al., 2015). The acoustic models are trained on 108.6 hours of BULATS test data (Gujarati L1 candidates) using an extended version of the HTK v3.4.1 toolkit (Young et al., 2009; Zhang and Woodland, 2015). 
A Kneser-Ney trigram LM is trained on 186K words of BULATs test data and interpolated with a general English LM trained on a large broadcast news corpus, using the SRILM toolkit (Stolcke, 2002). Lattices are re-scored with an interpolated trigram+RNN LM (Mikolov et al., 2010) by applying the 4gram history approximation described in (Liu et al., 2014), where the RNNLM is trained using the CUED RNNLM toolkit (Chen et al., 2016). Interpolation weights are optimized on the DEV REF data set. Table 2 shows the word error rate (WER) on the DEV test set relative to the DEV REF references for each section and the combined spontaneous speech sections (C-E). % WER A B C D E C-E 30.6 23.2 32.0 29.9 32.3 31.5 Table 2: ASR performance on DEV. 5 Experiments Two forms of experiment are conducted in order to assess the performance of the topic-adapted RNNLM. First, a topic classification experiment is run where the ability of the system to accurately recognize the topic of a response is evaluated. Second, a closed-set off-topic response detection experiment is done. In the experimental configuration used here a response is classified into a topic and the accuracy is measured. The topic of the question being answered is known and all responses are actually ontopic. A label (on-topic/off-topic) is given for each response based on the output of the classifier relative to the question topic. Thus, results presented are in terms of false rejection (FR) and false acceptance (FA) rates rather than precision and recall. Initial topic detection experiments were run using the DEV test set with both the reference transcriptions (REF) and recognition hypotheses (ASR) to compare different KNN and RNN systems. After this, performance is evaluated on the EVAL test set. The systems were trained using data sets of 490 and 10004 candidates, as described in section 4.1. 5.1 Topic Classification Performance of the topic-adapted RNNLM is compared to the KNN classifier in Table 3. The RNN1 system outperforms the KNN system by 20-35 % using the LDA topic representation. Furthermore, the KNN system performs worse on section E than it does on section C, while RNN1 performance is better on section E by 7-10% than on section C. The LSA topic representation consistently yields much better performance than LDA by 25-50% for both systems. Thus, the LDA representation is not further investigated in any experiments. When using the LSA representation the RNN1 system outperforms the KNN system only marginally, due to better performance on section E. Additionally, unlike the KNN-LDA system, the KNN-LSA system does not have a performance degradation on section E relative to section C. Notably, the RNN1 system performs better on section E by 5-13% than on section C. Clearly, section C questions are hardest to assess. Combining both representations through concatenation does not effect performance of the KNN system and slightly degrades RNN1 performance on section C. KNN and RNN1 systems with either topic representation perform comparably on REF and ASR. This suggests that the systems are quite robust to WER rates of 31.5% and the differences are mostly noise. Training the RNN2 system on 20 times as much data leads to greatly improved performance over KNN and RNN1 systems, almost halving the over1080 Topic System # Cands. C D E ALL (C-E) Repn. 
REF ASR REF ASR REF ASR REF ASR LDA KNN 490 75.0 81.0 37.0 42.0 91.8 91.1 68.0 71.4 RNN1 61.9 58.3 28.4 25.9 48.8 51.2 46.6 45.4 LSA KNN 490 32.1 28.6 2.5 3.7 31.3 33.3 22.0 21.9 RNN1 29.8 31.0 4.9 6.2 23.8 23.8 19.7 20.5 RNN2 10004 19.0 19.0 3.7 3.7 9.5 10.7 10.8 11.2 LDA KNN 490 30.9 29.8 2.5 3.7 31.5 33.3 21.7 22.3 +LSA RNN1 32.1 35.7 4.9 4.9 23.8 22.6 20.5 21.3 RNN2 10004 25.0 22.6 4.9 4.9 10.7 10.7 13.7 12.9 Table 3: % False rejection in topic detection using KNN classifier with 6 nearest neighbour and distance weights and RNNLM classifier on the DEV test set. 280 dim. topic spaces for LDA and LSA, and 560 dim. for LDA+LSA. all error rate. The KNN system could not be evaluated effectively in reasonable time using 20 times as many example responses and results are not shown, while RNN2 evaluation times are unaffected. Notably, RNN performance using the LSA representation scales with training data size better than with the LDA+LSA representation. Thus, we further investigate systems only with the LSA representation. Interestingly, section D performance is improved only marginally. Performance on section D is always best, as section D questions relate to discussion of charts and graphs for different conditions for which the vocabulary is very specific. Section C and E questions are the less distinct because they have freeform answers to broad questions, leading to higher response variability. This makes the linking of topic from the training data to the test data more challenging, particularly for 1-best classification, leading to higher error rates. Figure 2 shows the topic classification confusion matrix for the RNN1 LSA system. A similar matrix is observed for the KNN LSA system. Most confusions are with topics from the same section. This is because each section has a distinct style of questions and some questions within a section are similar. An example is shown below. Question SC-EX1 relates to personal local events in the workplace. SC-EX2, which relates to similar issues, is often confused with it. On the other hand, SC-EX3 is rarely confused with SC-EX1 as it is about non-personal events on a larger scale. • SC-EX1: Talk about some advice from a colleague. You should say: what the advice was, how it helped you and whether you would give the same advice to another colleague. • SC-EX2: Talk about a socially challenging day you had at work. You should say: what was the challenging situation, how you resolved it and why you found it challenging. • SC-EX3: Talk about a company in your local town which you admire. You should say: what company it is, what they do, why you admire them, and how the company impacts life in your town. Figure 2: RNN1 LSA confusion matrix on DEV ASR. System performance can be increased by considering N-best results, as described in Section 3. Results for systems trained on 490 and 10004 candidates are presented in Table 4. The error rate decreases as the value of N increases for all systems. However, performance scales better with N for the RNN systems than for KNN. Notably, for values of N > 1 performance of all systems on REF is better, which suggests that ASR errors do have a minor impact on system performance. 5.2 Off-topic response detection In the second experiment off-topic response detection is investigated. Performance is measured 1081 N System # Cands. 
REF ASR 1 KNN 490 22.1 21.9 RNN1 19.7 20.5 RNN2 10004 10.8 11.2 2 KNN 490 15.9 16.0 RNN1 13.7 16.1 RNN2 10004 6.8 7.6 3 KNN 490 13.5 14.3 RNN1 10.4 11.2 RNN2 10004 6.4 7.2 4 KNN 490 11.1 12.5 RNN1 8.8 10.0 RNN2 10004 5.2 6.4 Table 4: N-Best % false rejection performance of KNN and RNNLM classifiers with the LSA topic space on the DEV test set in terms of the false acceptance (FA) probability of an off-topic response and false rejection (FR) probability of an on-topic response. The experiment is run on DEV and EVAL test sets. Since neither DEV nor EVAL contain real off-topic responses, a pool Wq of such responses is synthetically generated for each question by using valid responses to other questions in the data set. Offtopic responses are then selected from this pool. A selection strategy defines which responses are present in Wq. Rather than using a single selection of off-topic responses, an expected performance over all possible off-topic response selections is estimated. The overall probability of falsely accepting an off-topic response can be expressed using equation 19. P(FA) = Q X q=1 X w∈Wq P(FA|w, q)P(w|q)P(q) (19) In equation 19, the question q is selected with uniform probability from the set Q of possible questions. The candidate randomly selects with uniform probability P(w|q) a response w from the pool Wq. The correct response to the question is not present in the pool. The conditional probability of false accept P(FA|w, q) = 1 if M(q) ∈ˆtN, and M(q) is not the real topic of the response w, otherwise P(FA|w, q) = 0. As shown in Figure 2, the main confusions will occur if the response is from the same section as the question. Two strategies for selecting off-topic responses are considered based on this: naive, where an incorrect response can be selected from any section; and directed, where an incorrect response can only be selected from the same section as the question. The naive strategy represents candidates who have little knowledge of the system and memorise responses unrelated to the test, while the directed strategy represents those who are familiar with the test system and have access to real responses from previous tests. Test Set System % Equal Error Rate Directed Naive DEV KNN 13.5 10.0 RNN1 10.0 7.5 RNN2 7.5 6.0 EVAL KNN 12.5 9.0 RNN1 8.0 6.0 RNN2 5.0 4.5 Table 5: % Equal Error Rate for LSA topic space systems on the DEV and EVAL test sets. Figure 3: ROC curves of LSA topic space systems on the EVAL test set. A Receiver Operating Characteristic (ROC) curve (Figure 3) can be constructed by plotting the FA and FR rates for a range of N. The RNN1 system performs better at all operating points than the KNN system for both selection strategies and evaluation test sets. Equal Error Rates (EER), where FA = FR, are given in Table 5. Results on EVAL are more representative of the difference between the KNN and RNN performance, as they are evaluated on nearly 3 times as many candidates. The 1082 RNN2 system achieves the lowest EER. It is interesting that for better systems the difference in performance against the naive and directed strategies decreases. This indicates that the systems become increasingly better at discriminating between similar questions. As expected, the equal error rate for the directed strategy is higher than for the naive strategy. 
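As a concrete reading of equation (19), the expected false-acceptance probability averages, with uniform weights, over questions and over the synthetic off-topic responses in each question's pool. The sketch below assumes callables for the question-to-topic mapping, the true response topic and the classifier's N-best output; these interfaces are our illustration, not the paper's implementation.

def expected_false_acceptance(questions, pools, question_topic, response_topic, n_best):
    # questions: question ids q; pools[q]: synthetic off-topic responses for q;
    # question_topic(q): the topic M(q); response_topic(w): true topic of response w;
    # n_best(w): the topics returned by the classifier for response w.
    total = 0.0
    for q in questions:
        pool = pools[q]
        hits = sum(1.0 for w in pool
                   if question_topic(q) in n_best(w)
                   and question_topic(q) != response_topic(w))
        total += hits / len(pool)      # uniform P(w|q) over the pool
    return total / len(questions)      # uniform P(q) over questions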
In relation to the stated task of detecting when a test candidate is giving a memorized response, the naive strategy represents a lowerbound on realistic system performance, as students are not likely to respond with a valid response to a different question. Most likely they will fail to construct a valid response or will add completely unrelated phrases memorised beforehand, which, unlike responses from other sections, may not come from the same domain as the test (eg: Business for BULATs). 6 Conclusion and Future Work In this work a novel off-topic content detection framework based on topic-adapted RNNLMs was developed. The system was evaluated on the task of detecting off-topic spoken responses on the BULATS test. The proposed approach achieves better topic classification and off-topic detection performance than the standard approaches. A limitation of both the standard and proposed approach is that if a new question is created by the test-makers, then it will be necessary to collect example responses before it can be widely deployed. However, since the system can be trained on ASR transcriptions, the example responses do not need to be hand-transcribed. This is an attractive deployment scenario, as only a smaller hand-transcribed data set is needed to train an ASR system with which to cost-effectively transcribe a large number of candidate recordings. Further exploration of different topic vector representations and their combinations is necessary in future work. Acknowledgements This research was funded under the ALTA Institute, University of Cambridge. Thanks to Cambridge English, University of Cambridge, for supporting this research and providing access to the BULATS data. References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, March. Lucy Chambers and Kate Ingham. 2011. The BULATS online speaking test. Research Notes, 43:21– 25. Xie Chen, Yongqiang Wang, Xunying Liu, Mark J.F. Gales, and P.C. Woodland. 2014. Efficient GPUbased Training of Recurrent Neural Network Language Models Using Spliced Sentence Bunch. In Proc. INTERSPEECH. Xie Chen, Tian Tan, Xunying Liu, Pierre Lanchantin, Moquan Wan, Mark J.F. Gales, and Philip C. Woodland. 2015. Recurrent Neural Network Language Model Adaptation for Multi-Genre Broadcast Speech Recognition. In Proc. INTERSPEECH. X. Chen, X. Liu, Y. Qian, M.J.F. Gales, and P.C. Woodland. 2016. CUED-RNNLM – an open-source toolkit for efficient training and evaluation of recurrent neural network language models. In Proc. ICASSP. Keelan Evanini and Xinhao Wang. 2014. Automatic detection of plagiarized spoken responses. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications. Keelan Evanini, Shasha Xie, and Klaus Zechner. 2013. Prompt-based Content Scoring for Automated Spoken Language Assessment. In Proc. NAACL-HLT. Thomas L. Griffiths and Mark Steyvers. 2004. Finding Scientific Topics. Proceedings of the National Academy of Sciences, 101:5228–5235. Thomas K Landauer, Peter W. Foltz, and Darrell Laham. 1998. Introduction to Latent Semantic Analysis. Discourse Processes, 25:259–284. Xunying Liu, Y. Wang, Xie Chen, Mark J.F. Gales, and Philip C. Woodland. 2014. Efficient Lattice Rescoring using Recurrent Neural Network Language Models. In Proc. INTERSPEECH. Angeliki Metallinou and Jian Cheng. 2014. Using Deep Neural Networks to Improve Proficiency Assessment for Children English Language Learners. In Proc. INTERSPEECH. 
Tomas Mikolov and Geoffrey Zweig. 2012. Context Dependent Recurrent Neural Network Language Model. In Proc. IEEE Spoken Language Technology Workshop (SLT). Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent Neural Network Based Language Model. In Proc. INTERSPEECH. 1083 Tomas Mikolov. 2012. Statistical Language Models Based on Neural Networks. Ph.D. thesis, Brno University of Technology. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830. Xuan-Hieu Phan and Cam-Tu Nguyen. 2007. GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA). http://gibbslda. sourceforge.net/. Barbara Seidlhofer. 2005. English as a lingua franca. ELT journal, 59(4):339. A Stolcke. 2002. SRILM – an extensible language modelling toolkit. In Proc. ICSLP. Haipeng Wang, Anton Ragni, Mark J. F. Gales, Kate M. Knill, Philip C. Woodland, and Chao Zhang. 2015. Joint Decoding of Tandem and Hybrid Systems for Improved Keyword Spotting on Low Resource Languages. In Proc. INTERSPEECH. Shasha Xie, Keelan Evanini, and Klaus Zechner. 2012. Exploring Content Features for Automated Speech Scoring. In Proc. NAACL-HLT. Helen Yannakoudakis. 2013. Automated assessment of English-learner writing. Technical Report UCAM-CL-TR-842, University of Cambridge Computer Laboratory. Su-Youn Yoon and Shasha Xie. 2014. SimilarityBased Non-Scorable Response Detection for Automated Speech Scoring. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications. Steve Young, Gunnar Evermann, Mark J. F. Gales, Thomas Hain, Dan Kershaw, Xunying (Andrew) Liu, Gareth Moore, Julian Odell, Dave Ollason, Dan Povey, Valtcho Valtchev, and Phil Woodland. 2009. The HTK book (for HTK Version 3.4.1). University of Cambridge. Klaus Zechner, Derrick Higgins, Xiaoming Xi, and David M. Williamson. 2009. Automatic scoring of non-native spontaneous speech in tests of spoken English. Speech Communication, 51(10):883–895. Spoken Language Technology for Education Spoken Language. Chau Zhang and Philip C. Woodland. 2015. A General Artificial Neural Network Extension for HTK. In Proc. INTERSPEECH. 1084
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1085–1094, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Synthesizing Compound Words for Machine Translation Austin Matthews and Eva Schlinger and Alon Lavie and Chris Dyer Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA {austinma,eschling,alavie,cdyer}@cs.cmu.edu Abstract Most machine translation systems construct translations from a closed vocabulary of target word forms, posing problems for translating into languages that have productive compounding processes. We present a simple and effective approach that deals with this problem in two phases. First, we build a classifier that identifies spans of the input text that can be translated into a single compound word in the target language. Then, for each identified span, we generate a pool of possible compounds which are added to the translation model as “synthetic” phrase translations. Experiments reveal that (i) we can effectively predict what spans can be compounded; (ii) our compound generation model produces good compounds; and (iii) modest improvements are possible in end-to-end English–German and English–Finnish translation tasks. We additionally introduce KomposEval, a new multi-reference dataset of English phrases and their translations into German compounds. 1 Introduction Machine translation systems make a closedvocabulary assumption: with the exception of basic rules for copying unknown word types from the input to the output, they can produce words in the target language only from a fixed, finite vocabulary. While this is always a naïve assumption given the long-tailed distributions that characterize natural language, it is particularly challenging in languages such as German and Finnish that have productive compounding processes. In such languages, expressing compositions of basic concepts can require an unbounded number of words. For example, English multiword phrases like market for bananas, market for pears, and market for plums are expressed in German with single compound words (respectively, as Bananenmarkt, Birnenmarkt, and Pflaumenmarkt). Second, while they are individually rare, compound words are, on the whole, frequent in native texts (Baroni et al., 2002; Fritzinger and Fraser, 2010). Third, compounds are crucial for translation quality. Not only does generating them make the output seem more natural, but they are contentrich. Since each compound has, by definition, at least two stems, they are intuitively (at least) doubly important for translation adequacy. Fortunately, compounding is a relatively regular process (as the above examples also illustrate), and it is amenable to modeling. In this paper we introduce a two-stage method (§2) to dynamically generate novel compound word forms given a source language input text and incorporate these as “synthetic rules” in a standard phrase-based translation system (Bhatia et al., 2014; Chahuneau et al., 2013; Tsvetkov et al., 2013). First, a binary classifier examines each source-language sentence and labels each span therein with whether that span could become a compound word when translated into the target language. Second, we transduce the identified phrase into the target language using a word-to-character translation model. This system makes a closed vocabulary assumption, albeit at the character (rather than word) level—thereby enabling new word forms to be generated. 
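At a high level, the two phases interact as in the sketch below; the function names, the span-length limit of two to four words (the range later used for training in Section 3.2), and the ten generator hypotheses per span (Section 5.3) are the only assumptions carried over from the paper, while everything else is an illustrative placeholder rather than the actual implementation.

```python
def synthesize_compound_rules(source_sentence, classify, generate, k=10):
    """Illustrative skeleton of the two-phase pipeline.
    classify(span_tokens) -> probability that the span compounds (Phase I)
    generate(span_tokens) -> ranked list of candidate compounds (Phase II)
    Returns (source span, compound) pairs to be added as synthetic rules."""
    tokens = source_sentence.split()
    rules = []
    for i in range(len(tokens)):
        for j in range(i + 2, min(i + 4, len(tokens)) + 1):  # spans of 2-4 words
            span = tokens[i:j]
            if classify(span) > 0.5:
                for compound in generate(span)[:k]:
                    rules.append((" ".join(span), compound))
    return rules
```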
Training data for these models is extracted from automatically aligned and compound split parallel corpora (§3). We evaluate our approach on both intrinsic and extrinsic metrics. Since German compounds are relatively rare, their impact on the standard MT evaluation metrics (e.g., BLEU) is minimal, as we 1085 show with an oracle experiment, and we find that our synthetic phrase approach obtains only modest improvements in overall translation quality. To better assess its merits, we commissioned a new test set, which we dub KomposEval (from the German word for a compound word, Komposita), consisting of a set of 1090 English phrases and their translations as German compound words by a professional English–German translator. The translator was instructed to produce as many compoundword translations as were reasonable (§4). This dataset permits us to evaluate our compound generation component directly, and we show that (i) without mechanisms for generating compound words, MT systems cannot produce the long tail of compounds; and (ii) our method is an effective method for creating correct compounds. 2 Compound Generation via Rule Synthesis Suppose we want to translate the sentence the market for bananas has collapsed . from English into German. In order to produce the following (good) translation, der bananenmarkt ist abgestürzt . a phrase-based translation system would need to contain a rule similar to market for bananas → bananenmarkt. While it is possible that such a rule would be learned from parallel corpora using standard rule extraction techniques, it is likely that such a rule would not exist (unless the system were trained on the translation examples from this paper). We solve the compound translation problem by “filling in” such missing rule gaps in the phrase table. The process takes place in two parts: first, identifying spans in the input that appear to be translatable as compounds (§2.1), and second, generating candidate compounds for each positively identified span (§2.2). Since synthesized rules compete along side rules which are learned using standard rule extraction techniques (and which are often quite noisy), our rule synthesis system can overgenerate rule candidates, a fact which we exploit in both phases. 2.1 Phase I: Classifying Compoundable Spans Given a source sentence, we classify each span therein (up to some maximum length) as either compoundable or non-compoundable using independent binary predictions. Rather than attempting to hand-engineer features to represent phrases, we use a bidirectional LSTM to learn a fixed-length vector representation hi,j that is computed by composing representations of the tokens (fi, fi+1, . . . , fj) in the input sentence. The probability that a span is compoundable is then modeled as: p(compoundable? |fi, fi+1, . . . , fj) = σ  w⊤tanh(Vhi,j + b) + a  , where σ is the logistic sigmoid function, and w, V, b, and a are parameters. To represent tokens that are inputs to the LSTM, we run a POS tagger (Toutanova et al., 2003), and for each token concatenate a learned embedding of the tag and word. Figure 1 shows the architecture. 
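One way to realize this classifier is sketched below in PyTorch. The way the span representation h_{i,j} is read off the bidirectional LSTM states (forward state at the right edge, backward state at the left edge) is one plausible interpretation rather than the exact architecture, and the layer sizes follow the small dimensions reported later in Section 3.2.

```python
import torch
import torch.nn as nn

class SpanCompoundClassifier(nn.Module):
    """Sketch of the span classifier of Section 2.1: word and POS
    embeddings feed a bidirectional LSTM; a span representation h_ij is
    taken as the concatenation of the forward state at the span's right
    edge and the backward state at its left edge (an assumption)."""

    def __init__(self, n_words, n_tags, emb_dim=10, hidden_dim=10):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb_dim)
        self.tag_emb = nn.Embedding(n_tags, emb_dim)
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.V = nn.Linear(2 * hidden_dim, hidden_dim)   # V h_ij + b
        self.w = nn.Linear(hidden_dim, 1)                # w^T tanh(.) + a

    def forward(self, word_ids, tag_ids, i, j):
        # word_ids, tag_ids: LongTensors of shape (1, sentence_length)
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        states, _ = self.lstm(x)                     # (1, len, 2 * hidden)
        fwd = states[0, j, :self.lstm.hidden_size]   # forward state at right edge
        bwd = states[0, i, self.lstm.hidden_size:]   # backward state at left edge
        h_ij = torch.cat([fwd, bwd])
        return torch.sigmoid(self.w(torch.tanh(self.V(h_ij))))
```

In practice this network would be trained with a cross-entropy objective and the Adam optimizer, with down-sampling of the overwhelming majority of negative spans, as described in Section 3.2.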
market for bananas </s> <s> <s> NN IN NNS </s> p(not a compound) p(is a compound) MLP hidden layer Forward LSTM Backward LSTM Concatenated Embeddings Part-of-speech Embeddings Word Embeddings Figure 1: A graphical representation of the neural network used for classifying whether an input source phrase should or should not turn into a compound word in the target language 2.2 Phase II: Generating Compound Words The second stage of our compound-generating pipeline is to generate hypothesis compound words for each source phrase that was identified as “compoundable” by the classifier just discussed. We do this by using a word-to-character–based machine translation system, which enables us to reuse a standard phrase-based decoder for compound generation. 1086 2.2.1 Generation Model The cornerstone of our generation approach is the forward and backward lexical translation tables learned by an unsupervised word aligner. We combine these two translation tables to create a wordto-character phrase table compatible with a standard decoder. This table allows our generator to know the correct translations of individual morphemes, but alone does not allow the generator to build full compound words. To capture the small bits of “phonetic glue” (e.g., the n that occurs between banane and markt in the compound bananenmarkt) that may occur when generating compound words, we insert a special SUF symbol in between each pair of source words. This symbol will allow us to insert a small suffix in between the translations of source words. Finally, we insert a special END symbol at the end of each source phrase. This symbol will allow the model to generate morphological variants due to suffixes indicating case, number, and agreement that only occur at the end of a whole compound word, but not in between the individual pieces. Some examples of all three types of rules are shown in Table 1. 2.2.2 Reordering and Word Dropping We observe that in order to generate many compounds, including bananenmarkts from “market for bananas”, a system must be able to both reorder and drop source words at will. Implemented naïvely, however, these allowances may produce invalid interleavings of source words and SUF/END tokens. For example, if we (correctly) drop the word “for” from our example, we might feed the decoder the sequence “market SUF SUF bananas END. To disallow such bogus input sequences we disable all reordering inside the decoder, and instead encode all possible reorderings in the form of an input lattice (Dyer et al., 2008). Moreover, we allow the decoder to drop non-content words by skipping over them in the lattice. Each edge in our lattices contains a list of features, including the indices, lexical forms, and parts of speech of each word kept or dropped. Each possible sequence in the lattice also encodes features of the full path of source words kept, the full list of source words dropped, the parts of speech of the path and all dropped words, and the order of indices traversed. With these constraints in place we can train the compound generator as though it were a normal MT system with no decode-time reordering. 3 Training Our approach to generating compound word forms in translation has two stages. First, we build a classifier that chooses spans of source text that could produce target compounds. Second, we build a compound generator that outputs hypothesis word forms, given a source phrase. We will detail each of these steps in turn. 
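Concretely, the decoder input for a compoundable span interleaves the source words with SUF and appends END. The sketch below shows this preprocessing together with one naive way of enumerating the reorderings and non-content-word drops that the paper encodes compactly as an input lattice; the list of droppable function words is purely illustrative.

```python
from itertools import permutations

def to_generator_input(span_tokens):
    """Interleave SUF between adjacent source words and append END, e.g.
    ['market', 'for', 'bananas'] ->
    ['market', 'SUF', 'for', 'SUF', 'bananas', 'END']"""
    out = []
    for k, tok in enumerate(span_tokens):
        if k:
            out.append("SUF")
        out.append(tok)
    out.append("END")
    return out

def input_variants(span_tokens, droppable=("for", "of", "the")):
    """Naive enumeration of the reorderings and word drops that the
    lattice represents; edge features are omitted for brevity."""
    variants = set()
    for kept in (list(span_tokens), [t for t in span_tokens if t not in droppable]):
        for perm in permutations(kept):
            variants.add(tuple(to_generator_input(list(perm))))
    return variants

print(to_generator_input(["market", "for", "bananas"]))
```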
3.1 Extracting Compounds from Bitext In order to learn to generate compound words we naturally require training data. Ideally we would like a large list of English phrases with their natural contexts and translations as German compounds. Of course virtually no such data exists, but it is possible to extract from parallel data, using a technique similar to that used by Tsvetkov and Wintner (2012). To this end, we take our tokenized bitext and pass it through Dyer (2009)’s German compound splitter. We then align the segmented variant using the fast_align tool in both the forward and reverse directions, which produces both word alignments and lexical translation tables, which give the probability of a compound part given an English phrase. We then symmetrize the produced pair of alignments with the intersection heuristic. This results in a sparse alignment in which each target word is aligned to either 0 or 1 source words. We then undo any splits performed by the compound splitter, resulting in a corpus where the only words aligned many-to-one are precisely well-aligned compounds. This process produces two crucially important data. First, a list of English phrase pairs that may become compound words in German on which we train our classifier. Second, the lexical translation tables, trained on compound split German data, which form the basis of our generation approach. 3.2 Training the Compoundability Classifier The network is trained to maximize cross-entropy of its training data using the Adam optimizer (Kingma and Ba, 2014) until performance on a held-out dev set stops improving. Due to the fact that we only need to represent the “compoundability” of each source-language word, and not its full semantics, we find that very small (10-dimensional) word and POS em1087 Source Target Non-Zero Features bananas b a n a n e φfwd = −0.495208 φrev = −0.455368 market m a r k t φfwd = −0.499118 φrev = −0.269879 SUF n φfwd = −3.718241 φuses_suf_n = 1.0 END s φfwd = −2.840721 φuses_end_s = 1.0 Table 1: A fragment of the word-to-character rules used in the compound generation system. beddings work well. The recurrent part of the neural network uses two-layer LSTM (Hochreiter and Schmidhuber, 1997) cells with the hidden layer size set to 10. The final MLP’s hidden layer size is also set to 10. The training data is processed such that each span of length two to four is considered one training example, and is labeled as positive if it is wellaligned (Brown et al., 1993) to a single German compound word. Since most spans do not translate as compounds, we are faced with an extreme class imbalance problem (a ratio of about 300:1). We therefore experiment with down sampling the negative training examples to have an equal number of positive and negative examples. 3.3 Training the Compound Generation Model As a translation model, there are two components to learning the translation system: learning the rule inventory and their features (§3.3.1) and learning the parameters of the generation model (§3.3.2). 3.3.1 Learning Word to Character Sequence Translation Rules The possible translations of SUF and END are learned from the list of positive training examples extracted for our classifier. For each example, we find all the possible ways the source words could translate, in accordance with our translation table, into nonoverlapping substrings of the target word. 
Any left over letters in between pieces become possible translations of SUF, while extra letters at the end of the target string become possible translations of END. Probabilities for each translation are estimated by simply counting and normalizing the number of times each candidate was seen. See Figure 2 for an example of this splitting process. 3.3.2 Learning Generator Feature Weights Since the generator model is encoded as a phrasebased machine translation system, we can train it using existing tools for this task. We choose to train using MIRA (Crammer and Singer, 2003), and use a 10-gram character-based language model trained on the target side of the positive training examples extracted for the classifier. 4 KomposEval Data Set To evaluate our compound generator we needed a dataset containing English phrases that should be compounded along with their German translations. To the best of our knowledge, no substantial human-quality dataset existed, so we created one as part of this work. We took our list of automatically extracted (English phrase, German compound) pairs and manually selected 1090 of them that should compound. We then asked a native German speaker to translate each English phrase into German compounds, and to list as many possibile compound translations as she could think of. The result is a test set consisting of 1090 English phrases, with between 1 and 5 possible German compound translations for each English phrase. This test set is published as supplementary material with this article. Some example translations are shown in Table 2. Source phrase Reference(s) transitional period Übergangsphase Übergangsperiode Übergangszeitraum Chamber of deputies Abgeordnetenhaus Abgeordnetenkammer self-doubt Selbstzweifel Table 2: Examples of human-generated compounds from the KomposEval data set 5 Experiments Before considering the problem of integrating our compound model with a full machine translation system, we perform an intrinsic evaluation of each of the two steps of our pipeline. 1088 pflaumenmarkts plums market for Translation Table Input Phrase market for plums Target Compound Possible Analyses pflaume+n mark+ts pflaumen+ε mark+ts pflaume+n markt+s pflaumen+ε markt+s pflaume+n markts+ε pflaumen+ε markts+ε SUF Counts END Counts ε n ε s ts 3 2 2 2 3 {ε, pflaume, pflaumen} {ε} {ε, mark, markt, markts} Figure 2: Decomposition of a target compound into possible analyses, given a source phrase and a morpheme-level translation table. This process allows us to learn the “phonetic glue” that can go in between morphemes, as well as the inflections that can attach to the end of a compound word. 5.1 Classifier Intrinsic Evaluation We evaluate the effectiveness of our classifier, by measuring its precision and recall on the two held out test sets described in §2.1 taken from two language pairs: English–German and English– Finnish. Furthermore, we show results both with down-sampling (balanced data set) and without down-sampling (unbalanced data set). Our classifier can freely trade off precision and recall by generalizing its requirement to call an example positive from p(compound | span) > 0.5 to p(compound | span) > τ, for τ ∈(0, 1), allowing us to report full precision-recall curves (Figure 3). We find that our best results for the unbalanced cases come at τ = 0.24 for German and τ = 0.29 for Finnish, with F-scores of 20.1% and 67.8%, respectively. In the balanced case, we achieve 67.1% and 97.0% F-scores with τ = 0.04 and τ = 0.57 on German and Finnish respectively. 
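Returning to the morpheme-splitting procedure used to learn the SUF and END translations (Section 3.3.1, Figure 2), it can be sketched as a recursive search that matches candidate morpheme translations against the target compound, treating leftover characters between pieces as SUF material and the leftover tail as END material. The data structures and the two-character glue limit below are assumptions made for illustration.

```python
from collections import Counter

def analyses(target, morph_options, max_glue=2):
    """Enumerate coverings of `target` by one candidate translation per
    source morpheme, allowing up to `max_glue` leftover characters
    between pieces (SUF) and any leftover tail (END).
    morph_options: one set of candidate target morphemes per
    content-bearing source word. Yields (pieces, sufs, end)."""
    def helper(pos, idx, pieces, sufs):
        if idx == len(morph_options):
            yield pieces, sufs, target[pos:]          # remaining tail -> END
            return
        glue_range = range(max_glue + 1) if idx else range(1)  # no glue before 1st piece
        for glue in glue_range:
            for cand in morph_options[idx]:
                if target.startswith(cand, pos + glue):
                    yield from helper(pos + glue + len(cand), idx + 1,
                                      pieces + [cand],
                                      sufs + ([target[pos:pos + glue]] if idx else []))
    yield from helper(0, 0, [], [])

suf_counts, end_counts = Counter(), Counter()
for pieces, sufs, end in analyses("pflaumenmarkts",
                                  [{"pflaume", "pflaumen"}, {"mark", "markt", "markts"}]):
    suf_counts.update(sufs)
    end_counts[end] += 1
print(suf_counts)   # glue candidates between pieces, e.g. 'n' and the empty string
print(end_counts)   # ending candidates, e.g. '', 's', 'ts'
```

Normalizing these counts gives the translation probabilities for SUF and END used by the generator.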
5.2 Generator Instrinsic Evaluation To evaluate our compound generator, we fed it the source side of our newly created KomposEval corpus and had it output a 100-best list of hypotheses translations for each English phrase. From this we are able to compute many intrinsic quality metrics. We report the following metrics: • Mean reciprocal rank (MRR); which is one divided by the average over all segments of the position that the reference translation appears in our k-best list. • Character error rate (CER), or the average number of character-level edits that are required to turn our 1-best hypothesis into the Figure 3: Precision-recall curves for our compound classifier for two languages: German (red) and Finnish (blue). Unbalanced test set results are shown with solid lines. Balanced test set results are shown with dashed lines. nearest of the reference translations. • Precision at 1, 5, and 10, which indicate what percentage of the time a reference translation can be found in the top 1, 5, or 10 hypotheses of our k-best list, respectively. These results can be found in Table 3. We compare to a naïve baseline that is just a standard English–German phrase-based translation system with no special handling of compound word forms. We immediately see that the baseline system is simply unable to generate most of the compound words in the test set, resulting in extraordinarily low metric scores across the board. Its one saving grace is its tolerable CER score, which shows that the system is capable of generating the correct morphemes, but is failing to correctly ad1089 MRR ↑CER ↓P@1 ↑P@5 ↑P@10 ↑ Baseline <0.01 3.305 0% 0% <0.01% Our model 0.7004 2.506 61.38% 81.47% 84.31% Table 3: Mean reciprocal rank, character error rate, and precision at K statistics of our baseline MT system and our compound generator. join them and add the phonological glue required to produce a well-formed compound word. Our system, on the other hand, is capable of reaching at least one of the five references for every single sentence in the test set, and has a reference translation in the top 5 hypotheses in its k-best list over 80% of the time. Qualitatively, the compounds generated by our model are remarkably good, and very understandable. Major error classes include incorrect word sense, non-compositional phrases, and special non- concatenative effects at word boundaries. An example of each of these errors, along with some examples of good compound generation can be found in Table 4. 5.3 Extrinsic Translation Evaluation Finally, we use our compound generator as part of a larger machine translation pipeline. We run our compound span classifier on each of our translation system’s tune and test sets, and extract our generator’s top ten hypotheses for each of the postively identified spans. These English phrases are then added to a synthetic phrase table, along with their German compound translations, and two features: the compound generator’s score, and an indicator feature simply showing that the rule represents a synthetic compound. Table 5 shows some example rules of this form. The weights of these features are learned, along with the standard translation system weights, by the MIRA algorithm as part of the MT training procedure. The underlying translation system is a standard Hiero (Chiang et al., 2005) system using the cdec (Dyer et al., 2010) decoder, trained on all constrained-track WMT English–German data as of the 2014 translation task. Tokenization was done with cdec’s tokenize-anything script. 
The first character of each sentence was down cased if the unigram probability of the downcased version of the first word was higher than that of the original casing. Word alignment was performed using cdec’s fast_align tool, BLEU ↑METR ↑TER ↓Len WMT2012 Baseline 16.2 34.5 64.8 94.1 +Our Compounds 16.3 34.6 64.9 94.2 +Oracle Compounds 16.9 35.2 64.6 95.5 WMT2013* Baseline 18.8 37.3 62.1 93.6 +Our Compounds 18.9 37.5 62.3 96.7 +Oracle Compounds 19.7 38.2 61.9 97.6 WMT2014 Baseline 19.6 38.9 64.3 103.5 +Compounds 19.6 39.0 64.5 103.9 +Oracle Compounds 21.7 40.9 61.1 100.6 Table 6: Improvements in English–German translation quality using our method of compound generation on WMT 2012, 2013, and 2014. * indicates the set used for tuning the MT system. and symmetrized using the grow-diag heuristic. Training is done using cdec’s implementation of the MIRA algorithm. Evaluation was done using MultEval (Clark et al., 2011). A 4-gram language model was estimated using KenLM’s lmplz tool (Heafield et al., 2013). In addition to running our full end-to-end pipeline, we run an oracle experiment wherein we run the same pre-processing pipeline (compound splitting, bidirectionally aligning, intersecting, and de-splitting) on each test set to identify which spans do, in fact, turn into compounds, as well as their ideal translations. We then add grammar rules that allow precisely these source spans to translate into these oracle translations. This allows us to get an upper bound on the impact compound generation could have on translation quality. The results, summarized in Table 6 and Table 7, show that adding these extra compounds has little effect on metric scores compared to our baseline system. Nevertheless, we believe that the qualitative improvements of our methods are more significant than the automatic metrics would indicate. Our method targets a very specific problem that pertains only to dense content-bearing target words that humans find very important. Moreover, BLEU is unable to reasonably evaluate improvements in these long tail phenomena, as it only captures exact lexical matches, and because we are purposely generating fewer target words than a standard translation system. 6 Related Work Most prior work on compound generation has taken a different approach from the one advo1090 Input Hypothesis Reference Comments cheese specialities Fachkäse Käsespezialitäten Wrong sense of “specialties” band-aid Band-hilfe (Should not compound) Idiosyncratic meaning church towers Kirchentürme Kirchtürme Extra word-internal case marking sugar beet farmers Zuckerrübenbauern Zuckerrübenbauern Perfect tomato processing Tomatenverarbeitung Tomatenverarbeitung Perfect generation of electricity Stromerzeugung Stromerzeugung Perfect, including reordering Table 4: Examples of erroneous (top) and correct (bottom) compounds generated by our system Source Target Non-Zero Features market for bananas bananenmarkt φCompound = 1 φScore = −38.9818 market for bananas bananenmarktes φCompound = 1 φScore = −49.8976 market for bananas marktordnung φCompound = 1 φScore = −53.2197 market for bananas bananenmarkts φCompound = 1 φScore = −54.4962 market for bananas binnenmarkt φCompound = 1 φScore = −57.6816 Table 5: Example synthetic rules dynamically added to our system to translate the phrase “market for bananas” into a German compound word. Note that we correctly generate both the nominative form (with no suffix) and the genitive forms (with the -s and -es suffixes). 
BLEU ↑METR ↑TER ↓Len Dev* Baseline 12.3 29.0 72.7 96.5 +Our Compounds 12.3 29.1 72.8 96.8 DevTest Baseline 11.4 29.9 71.6 96.2 +Our Compounds 11.6 30.1 71.5 96.4 Test Baseline 10.8 28.4 73.4 96.7 +Our Compounds 10.9 28.5 73.3 96.9 Table 7: Improvements in English–Finnish translation quality using our method of compound generation on WMT 2014 tuning, devtest, and test sets. * indicates the set used for tuning the MT system. cated here, first translating the source language into a morphologically analyzed and segmented variant of the target language, and then performing morphological generation on this sequence (Cap et al., 2014; Irvine and Callison-Burch, 2013; Denkowski et al., 2013; Clifton and Sarkar, 2011; Stymne and Cancedda, 2011). Requesting multiple translations from a translator has been used in the past, most notably to create HyTER reference lattices (Dreyer and Marcu, 2012). However, in contrast to full-sentence translations the space of possible grammatical compounds is far smaller, substantially simplifying our task. The splitting of German compound phrases for translation from German into English has been addressed by Koehn and Knight (2001) and Dyer (2009). They elegantly solve the problem of having a large, open vocabulary on the source side by splitting compound words into their constituent morphemes and translating German into English at the morpheme level. Their approach works excellently when translating out of a compounding language, but is unable to generate novel compound words in the target language without some sort of post processing. Dynamic generation of compounds in a target language using such post processing has been examined in the past by Cap et al. (2014) and Clifton and Sarkar (2011). Both perform compound splitting on their parallel data, train a morphemebased translation system, and then stitch compound words back together using different models. While effective, their approach runs into difficulties if the morphemes that should compound get separated by the reordering model during the translation process. Both address this using more complicated models, whereas our holistic approach handles this problem seamlessly. Stymne (2012) gives an excellent taxonomy of compound types in Germanic languages, and discusses many different strategies that have been used to split and merge them for the purposes of machine translation. She identifies several difficulties with the split-translate-merge approach and points out some key subtleties, such as handling of bound morphemes that never occur outside of 1091 compounds, that one must bear in mind when doing translation to or from compounding languages. The idea of using entirely character-based translation systems was introduced by Vilar et al. (2007). While their letter-level translation system alone did not outperform standard phrase-based MT on a Spanish–Catalan task, they demonstrated substantial BLEU gains when combining phraseand character-based translation models, particularly in low resource scenarios. 7 Conclusion In this paper we have presented a technique for generating compound words for target languages with open vocabularies by dynamically introducing synthetic translation options that allow spans of source text to translate as a single compound word. Our method for generating such synthetic rules decomposes into two steps. First an RNN classifier detects compoundable spans in the source sentence. Second, a word-to-character machine translation system translates the span of text into a compound word. 
By dynamically adding compound words to our translation grammars in this way we allow the decoder, which is in turn informed by the language model, to determine which, if any, of our hypothesized compounds look good in context. Our approach does away with the need for post processing, and avoids complications caused by reordering of morphemes in previous approaches. However, this technique relies heavily on a strong target language model. Therefore, one important extension of our work is to further study the interaction between our model and the underlying language model. In addition to our generation technique we have presented a new human-quality data set that specifically targets compounding and use it to demonstrate tremendous improvements in our translation system’s ability to correctly generalize from compound words found in parallel text to match human translations of unseen compoundable phrases. 1092 Acknowledgements We thank the anonymous reviewers for their careful reading of the submitted draft of this paper. Furthermore, we thank Isabelle Wolf for her work in creating the KomposEval data set. This research work was supported by a Google faculty research award and by computing resources provided by the NSF-sponsored XSEDE program under grant TG-CCR110017. The statements made herein are solely the responsibility of the authors. References Marco Baroni, Johannes Matiasek, and Harald Trost. 2002. Predicting the components of german nominal compounds. In ECAI, pages 470–474. Archna Bhatia, Chu-Cheng Lin, Nathan Schneider, Yulia Tsvetkov, Fatima Talib Al-Raisi, Laleh Roostapour, Jordan Bender, Abhimanu Kumar, Lori Levin, Mandy Simons, et al. 2014. Automatic classification of communicative functions of definiteness. Association for Computational Linguistics. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Fabienne Cap, Alexander Fraser, Marion Weller, and Aoife Cahill. 2014. How to produce unseen teddy bears: Improved morphological processing of compounds in SMT. In Proc. EACL. Victor Chahuneau, Eva Schlinger, Noah A Smith, and Chris Dyer. 2013. Translating into morphologically rich languages with synthetic phrases. David Chiang, Adam Lopez, Nitin Madnani, Christof Monz, Philip Resnik, and Michael Subotin. 2005. The hiero machine translation system: Extensions, evaluation, and analysis. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 779–786. Association for Computational Linguistics. Jonathan H Clark, Chris Dyer, Alon Lavie, and Noah A Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 176–181. Association for Computational Linguistics. Ann Clifton and Anoop Sarkar. 2011. Combining morpheme-based machine translation with postprocessing morpheme prediction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 32–42. Association for Computational Linguistics. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951– 991. 
Waleed Ammar Victor Chahuneau Michael Denkowski, Greg Hanneman, Wang Ling Austin Matthews Kenton Murray, Nicola Segall Yulia Tsvetkov, and Alon Lavie Chris Dyer. 2013. The cmu machine translation systems at wmt 2013: Syntax, synthetic translation options, and pseudoreferences. In 8th Workshop on Statistical Machine Translation, page 70. Markus Dreyer and Daniel Marcu. 2012. Hyter: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162–171. Association for Computational Linguistics. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. Technical report, DTIC Document. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Johnathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of ACL. Chris Dyer. 2009. Using a maximum entropy model to build segmentation lattices for mt. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 406–414. Association for Computational Linguistics. Fabienne Fritzinger and Alexander Fraser. 2010. How to avoid burning ducks: combining linguistic analysis and corpus statistics for german compound processing. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 224–234. Association for Computational Linguistics. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria, August. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ann Irvine and Chris Callison-Burch. 2013. Supervised bilingual lexicon induction with multiple monolingual signals. In HLT-NAACL, pages 518– 523. 1093 Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn and Kevin Knight. 2001. Knowledge sources for word-level translation models. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 27–35. Sara Stymne and Nicola Cancedda. 2011. Productive generation of compound words in statistical machine translation. In Proc. WMT. Sara Stymne. 2012. Text harmonization strategies for phrase-based statistical machine translation. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Association for Computational Linguistics. Yulia Tsvetkov and Shuly Wintner. 2012. Extraction of multi-word expressions from small parallel corpora. Natural Language Engineering, 18(04):549– 573. Yulia Tsvetkov, Chris Dyer, Lori Levin, and Archna Bhatia. 2013. Generating English determiners in phrase-based translation with synthetic translation options. In Proc. WMT. David Vilar, Jan-T Peter, and Hermann Ney. 2007. Can we translate letters? 
In Proceedings of the Second Workshop on Statistical Machine Translation, pages 33–39. Association for Computational Linguistics.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1095–1104, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Harnessing Cognitive Features for Sarcasm Detection Abhijit Mishra†, Diptesh Kanojia†, Seema Nagar ⋆, Kuntal Dey⋆, Pushpak Bhattacharyya† †Indian Institute of Technology Bombay, India ⋆IBM Research, India †{abhijitmishra, diptesh, pb}@cse.iitb.ac.in ⋆{senagar3, kuntadey}@in.ibm.com Abstract In this paper, we propose a novel mechanism for enriching the feature vector, for the task of sarcasm detection, with cognitive features extracted from eye-movement patterns of human readers. Sarcasm detection has been a challenging research problem, and its importance for NLP applications such as review summarization, dialog systems and sentiment analysis is well recognized. Sarcasm can often be traced to incongruity that becomes apparent as the full sentence unfolds. This presence of incongruity- implicit or explicit- affects the way readers eyes move through the text. We observe the difference in the behaviour of the eye, while reading sarcastic and non sarcastic sentences. Motivated by this observation, we augment traditional linguistic and stylistic features for sarcasm detection with the cognitive features obtained from readers eye movement data. We perform statistical classification using the enhanced feature set so obtained. The augmented cognitive features improve sarcasm detection by 3.7% (in terms of Fscore), over the performance of the best reported system. 1 Introduction Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule 1. Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines. 1The Free Dictionary Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from Joshi et al. (2016). The following discussion brings more insights into this. Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition to why, for sarcasm detection, it is necessary to go beyond textual analysis. We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eyemovement trajectories that supports the conjecture: Reading sarcastic texts induces distinctive eye movement patterns, compared to literal texts. 
The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types: 1. Eye movement characteristic features of readers while reading given text, comprising gaze-fixaions (i.e,longer stay of gaze on a visual object), forward and backward saccades (i.e., quick jumping of gaze between two positions of rest). 1095 2. Features constructed using the statistical and deeper structural information contained in graph, created by treating words as vertices and saccades between a pair of words as edges. The cognitive features, along with textual features used in best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show significant improvement in classification accuracy over the state of the art, by performing such augmentation. Feasibility of Our Approach Since our method requires gaze data from human readers to be available, the methods practicability becomes questionable. We present our views on this below. Availability of Mobile Eye-trackers Availability of inexpensive embedded eye-trackers on hand-held devices has come close to reality now. This opens avenues to get eye-tracking data from inexpensive mobile devices from a huge population of online readers non-intrusively, and derive cognitive features to be used in predictive frameworks like ours. For instance, Cogisen: (http://www.sencogi.com) has a patent (ID: EP2833308-A1) on “eye-tracking using inexpensive mobile web-cams”. Applicability Scenario We believe, mobile eye-tracking modules could be a part of mobile applications built for e-commerce, online learning, gaming etc. where automatic analysis of online reviews calls for better solutions to detect linguistic nuances like sarcasm. To give an example, let’s say a book gets different reviews on Amazon. Our system could watch how readers read the review using mobile eye-trackers, and thereby, decide whether the text contains sarcasm or not. Such an application can horizontally scale across the web and will help in improving automatic classification of online reviews. Since our approach seeks human mediation, one might be tempted to question the approach of relying upon eye-tracking, an indirect indicator, instead of directly obtaining man-made annotations. We believe, asking a large number of internet audience to annotate/give feedback on each and every sentence that they read online, following a set of annotation instructions, will be extremely intrusive and may not be responded well. Our system, on the other hand, can be seamlessly integrated into existing applications and as the eye-tracking process runs in the background, users will not be interrupted in the middle of the reading. This, thus, offers a more natural setting where human mediation can be availed without intervention. Getting Users’ Consent for Eye-tracking Eye-tracking technology has already been utilized by leading mobile technology developers (like Samsung) to facilitate richer user experiences through services like Smart-scroll (where a user’s eye movement determines whether a page has to be scrolled or not) and Smart-lock (where user’s gaze position decides whether to lock the screen or not). The growing interest of users in using such services takes us to a promising situation where getting users’ consent to record eyemovement patterns will not be difficult, though it is yet not the current state of affairs. 
Disclaimer: In this work, we focus on detecting sarcasm in non-contextual and short-text settings prevalent in product reviews and social media. Moreover, our method requires eye-tracking data to be available in the test scenario. 2 Related Work Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works Jorgensen et al. (1984) explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the word of Clark and Gerrig (1984), sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. Giora (1995), on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. Ivanko and Pexman (2003) define sarcasm as a six tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing. Computational linguists have previously addressed this problem using rule based and statistical techniques, that make use of : (a) Unigrams and Pragmatic features (Carvalho et al., 2009; Gonz´alez-Ib´anez et al., 2011; Barbieri et al., 2014; Joshi et al., 2015) (b) Stylistic patterns (Davidov et al., 2010) and patterns related to situational disparity (Riloff et al., 2013) and (c) Hastag 1096 interpretations (Liebrecht et al., 2013; Maynard and Greenwood, 2014). Most of the previously done work on sarcasm detection uses distant supervision based techniques (ex: leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as “lol” etc). But, detecting sarcasm in linguistically well-formed structures, in absence of explicit cues or information (like emoticons), proves to be hard using such linguistic/stylistic features alone. With the advent of sophisticated eyetrackers and electro/magneto-encephalographic (EEG/MEG) devices, it has been possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik (2014), using a series of eye-tracking and EEG experiments try to show that for unfamiliar ironies, the literal interpretation would be computed first. They also show that a mismatch with context would lead to a re-interpretation of the statement, as being ironic. Camblin et al. (2007) show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse incongruent words, even though they are perfectly congruous at the sentence level. In our previous work (Mishra et al., 2016), we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not. The recent advancements in the literature discussed above, motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind. 3 Eye-tracking Database for Sarcasm Analysis Sarcasm often emanates from incongruity (Campbell and Katz, 2012), which enforces the brain to reanalyze it (Kutas and Hillyard, 1980). This, in turn, affects the way eyes move through the text. Hence, distinctive eye-movement patterns may be observed in the case of successful processing of sarcasm in text in contrast to literal texts. 
This hypothesis forms the crux of our method for sarcasm detection and we validate this using our previously released freely available sarcasm dataset2 (Mishra et al., 2016) enriched with gaze 2http://www.cfilt.iitb.ac.in/cognitive-nlp µ S σ S µ NS σ NS t p P1 319 145 196 97 14.1 5.84E-39 P2 415 192 253 130 14.0 1.71E-38 P3 322 173 214 160 9.5 3.74E-20 P4 328 170 191 96 13.9 1.89E-37 P5 291 151 183 76 11.9 2.75E-28 P6 230 118 136 84 13.2 6.79E-35 P7 488 268 252 141 15.3 3.96E-43 Table 1: T-test statistics for average fixation duration time per word (in ms) for presence of sarcasm (represented by S) and its absence (NS) for participants P1-P7. information. 3.1 Document Description The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites3, (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus (Pang and Lee, 2004) by two linguists. (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic or culture specific. The tweets were normalized to make them linguistically well formed to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries positive or negative opinion about specific “aspects”. For example, the sentence “The movie is extremely well cast” has positive sentiment about the aspect “cast”. The annotators were seven graduate students with science and engineering background, and possess good English proficiency. They were given a set of instructions beforehand and are advised to seek clarifications before they proceed. The instructions mention the nature of the task, annotation input method, and necessity of head movement minimization during the experiment. 3.2 Task Description The task assigned to annotators was to read sentences one at a time and label them with with binary labels indicating the polarity (i.e., positive/negative). Note that, the participants were not 3http://www.sarcasmsociety.com, http://www.themarysue.com/funny-amazon-reviews 1097 instructed to annotate whether a sentence is sarcastic or not., to rule out the Priming Effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier (Gibbs, 1986)). The setup ensures its “ecological validity” in two ways: (1) Readers are not given any clue that they have to treat sarcasm with special attention. This is done by setting the task to polarity annotation (instead of sarcasm detection). (2) Sarcastic sentences are mixed with non sarcastic text, which does not give prior knowledge about whether the forthcoming text will be sarcastic or not. The eye-tracking experiment is conducted by following the standard norms in eye-movement research (Holmqvist et al., 2011). At a time, one sentence is displayed to the reader along with the “aspect” with respect to which the annotation has to be provided. While reading, an SR-Research Eyelink-1000 eye-tracker (monocular remote mode, sampling rate 500Hz) records several eye-movement parameters like fixations (a long stay of gaze) and saccade (quick jumping of gaze between two positions of rest) and pupil size. 
The accuracy of polarity annotation varies between 72%-91% for sarcastic texts and 75%-91% for non-sarcastic text, showing the inherent difficulty of sentiment annotation, when sarcasm is present in the text under consideration. Annotation errors may be attributed to: (a) lack of patience/attention while reading, (b) issues related to text comprehension, and (c) confusion/indecisiveness caused due to lack of context. For our analysis, we do not discard the incorrect annotations present in the database. Since our system eventually aims to involve online readers for sarcasm detection, it will be hard to segregate readers who misinterpret the text. We make a rational assumption that, for a particular text, most of the readers, from a fairly large population, will be able to identify sarcasm. Under this assumption, the eye-movement parameters, averaged across all readers in our setting, may not be significantly distorted by a few readers who would have failed to identify sarcasm. This assumption is applicable for both regular and multi-instance based classifiers explained in section 6. 4 Analysis of Eye-movement Data We observe distinct behavior during sarcasm reading, by analyzing the “fixation duration on the text” (also referred to as “dwell time” in the lit Word ID Time ( miliseconds) P1 P2 P3 S2: The lead actress is terrible and I cannot be convinced she is supposed to be some forensic genius. S1: I'll always cherish the original misconception I had of you.. Figure 1: Scanpaths of three participants for two negatively polar sentences sentence S1 and S2. Sentence S1 is sarcastic but S2 is not. erature) and “scanpaths” of the readers. 4.1 Variation in the Average Fixation Duration per Word Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time (Ivanko and Pexman, 2003). Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one. We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times more than that of non-sarcastic texts. To test the statistical significance, we conduct a twotailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit (α) is set to 0.05. The t-test analysis, presented in Table 1, shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and nonsarcasm (lower average fixation duration). This affirms that the presence of sarcasm affects the duration of fixation on words. It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity and syntactically complex structures) causing delay in comprehension, or occulomotor control problems forcing readers to spend time adjusting eye-muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. But we would also like to share that, for our 1098 I will always cherish the original misconception I had of you Figure 2: Saliency graph of participant P1 for the sentence I will always cherish the original misconception I had of you. 
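The recorded fixations are subsequently aggregated into per-word dwell-time statistics and compared across sarcastic and non-sarcastic conditions (Table 1, Section 4.1). A minimal sketch of that aggregation and of the two-tailed Welch t-test follows, assuming fixations arrive as (word index, duration in ms) pairs; the data layout is an assumption, not the schema of the released database.

```python
from scipy import stats

def avg_fixation_duration_per_word(fixations, n_words):
    """fixations: (word_index, duration_ms) pairs for one reader and one
    sentence; returns total fixation time divided by the word count."""
    return sum(d for _, d in fixations) / n_words

def compare_conditions(sarcastic_vals, literal_vals):
    """Two-tailed Welch t-test (unequal variances) between the per-word
    average fixation durations of the two sentence groups, mirroring the
    per-participant comparison reported in Table 1."""
    return stats.ttest_ind(sarcastic_vals, literal_vals, equal_var=False)
```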
dataset, when we considered readability (Flesch readability ease-score (Flesch, 1948)), number of words in a sentence and average character per word along with the sarcasm label as the predictors of average fixation duration following a linear mixed effect model (Barr et al., 2013), sarcasm label turned out to be the most significant predictor with a maximum slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset. We now analyze scanpaths to gain more insights into the sarcasm comprehension process. 4.2 Analysis of Scanpaths Scanpaths are line-graphs that contain fixations as nodes and saccades as edges; the radii of the nodes represent the fixation duration. A scanpath corresponds to a participant’s eye-movement pattern while reading a particular sentence. Figure 1 presents scanpaths of three participants for the sarcastic sentence S1 and the non-sarcastic sentence S2. The x-axis of the graph represents the sequence of words a reader reads, and the y-axis represents a temporal sequence in milliseconds. Consider a sarcastic text containing incongruous phrases A and B. Our qualitative scanpathanalysis reveals that scanpaths with respect to sarcasm processing have two typical characteristics. Often, a long regression - a saccade that goes to a previously visited segment - is observed when a reader starts reading B after skimming through A. In a few cases, the fixation duration on A and B are significantly higher than the average fixation duration per word. In sentence S1, we see long and multiple regressions from the two incongruous phrases “misconception” and “cherish”, and a few instances where phrases “always cherish” and “original misconception” are fixated longer than usual. Such eye-movement behaviors are not seen for S2. Though sarcasm induces distinctive scanpaths like the ones depicted in Figure 1 in the observed examples, presence of such patterns is not sufficient to guarantee sarcasm; such patterns may also possibly arise from literal texts. We believe that a combination of linguistic features, readability of text and features derived from scanpaths would help discriminative machine learning models learn sarcasm better. 5 Features for Sarcasm Detection We describe the features used for sarcasm detection in Table 2. The features enlisted under lexical,implicit incongruity and explicit incongruity are borrowed from various literature (predominantly from Joshi et al. (2015)). These features are essential to separate sarcasm from other forms semantic incongruity in text (for example ambiguity arising from semantic ambiguity or from metaphors). Two additional textual features viz. readability and word count of the text are also taken under consideration. These features are used to reduce the effect of text hardness and text length on the eye-movement patterns. 5.1 Simple Gaze Based Features Readers’ eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (i.e., either computing features for individual participants and then averaging or performing a multi-instance based learning as explained in section 6). Since these eye-movement attributes relate to the cognitive process in reading (Rayner and Sereno, 1994), we consider these as features in our model. Some of these features have been reported by Mishra et al. (2016) for modeling sarcasm understandability of readers. 
However, as far as we know, these features are being introduced in NLP tasks like textual sarcasm detection for the first time. The values of these features are believed to increase with the increase in the degree of surprisal caused by incongruity in text (except skip count, which will decrease). 5.2 Complex Gaze Based Features For these features, we rely on a graph structure, namely “saliency graphs”, derived from eye-gaze information and word sequences in the text. Constructing Saliency Graphs: For each reader and each sentence, we construct a “saliency graph”, representing the reader’s atten1099 Subcategory Feature Name Type Intent Category: Textual Sarcasm Features, Source: Joshi et. al. Lexical Presence of Unigrams (UNI) Boolean Unigrams in the training corpus Punctuations (PUN) Real Count of punctuation marks Implicit Incongruity Implicit Incongruity (IMP) Boolean Incongruity of extracted implicit phrases (Rilof et.al, 2013) Explicit Incongruity (EXP) Integer Number of times a word follows a word of opposite polarity Largest Pos/Neg Subsequence (LAR) Integer Length of the largest series of words with polarities unchanged Explicit Positive words (+VE) Integer Number of positive words Incongruity Negative words (-VE) Integer Number of negative words Lexical Polarity (LP) Integer Sentence polarity found by supervised logistic regression Category: Cognitive Features. We introduce these features for sarcasm detection. Readability (RED) Real Flesch Readability Ease (Flesch, 1948) score of the sentence Textual Number of Words (LEN) Integer Number of words in the sentence Avg. Fixation Duration (FDUR) Real Sum of fixation duration divided by word count Avg. Fixation Count (FC) Real Sum of fixation counts divided by word count Avg. Saccade Length (SL) Real Sum of saccade lengths (measured by number of words) divided by word count Simple Regression Count (REG) Real Total number of gaze regressions Gaze Skip count (SKIP) Real Number of words skipped divided by total word count Based Count of regressions from second half to first half of the sentence (RSF) Real Number of regressions from second half of the sentence to the first half of the sentence (given the sentence is divided into two equal half of words) Largest Regression Position (LREG) Real Ratio of the absolute position of the word from which a regression with the largest amplitude (number of pixels) is observed, to the total word count of sentence Edge density of the saliency gaze graph (ED) Real Ratio of the number of directed edges to vertices in the saliency gaze graph (SGG) Fixation Duration at Left/Source (F1H, F1S) Real Largest weighted degree (LWD) and second largest weighted degree (SWD) of the SGG considering the fixation duration of word i of edge Eij Complex Fixation Duration at Right/Target (F2H, F2S) Real LWD and SWD of the SGG considering the fixation duration of word j of edge Eij Gaze Forward Saccade Word Count of Source (PSH, PSS) Real LWD and SWD of the SGG considering the number of forward saccades between words i and j of an edge Eij Based Forward Saccade Word Count of Destination (PSDH, PSDS) Real LWD and SWD of the SGG considering the total distance (word count) of forward saccades between words i and j of an edge Eij Regressive Saccade Word Count of Source (RSH, RSS) Real LWD and SWD of the SGG considering the number of regressive saccades between words i and j of an edge Eij Regressive Saccade Word Count of Destination (RSDH, RSDS) Real LWD and SWD of the SGG considering the total distance (word count) of 
regressive saccades between words i and j of an edge Eij Table 2: The complete set of features used in our system. tion characteristics. A saliency graph for a sentence S for a reader R, represented as G = (V, E), is a graph with vertices (V ) and edges (E) where each vertex v ∈V corresponds to a word in S (may not be unique) and there exists an edge e ∈E between vertices v1 and v2 if R performs at least one saccade between the words corresponding to v1 and v2. Figure 2 shows an example of a saliency graph.A saliency graph may be weighted, but not necessarily connected, for a given text (as there may be words in the given text with no fixation on them). The “complex” gaze features derived from saliency graphs are also motivated by the theory of incongruity. For instance, Edge Density of a saliency graph increases with the number of distinct saccades, which could arise from the complexity caused by presence of sarcasm. Similarly, the highest weighted degree of a graph is expected to be higher, if the node corresponds to a phrase, incongruous to some other phrase in the text. 6 The Sarcasm Classifier We interpret sarcasm detection as a binary classification problem. The training data constitutes 1100 Features P(1) P(-1) P(avg) R(1) R(-1) R(avg) F(1) F(-1) F(avg) Kappa Multi Layered Neural Network Unigram 53.1 74.1 66.9 51.7 75.2 66.6 52.4 74.6 66.8 0.27 Sarcasm (Joshi et. al.) 59.2 75.4 69.7 51.7 80.6 70.4 55.2 77.9 69.9 0.33 Gaze 62.4 76.7 71.7 54 82.3 72.3 57.9 79.4 71.8 0.37 Gaze+Sarcasm 63.4 75 70.9 48 84.9 71.9 54.6 79.7 70.9 0.34 N¨aive Bayes Unigram 45.6 82.4 69.4 81.4 47.2 59.3 58.5 60 59.5 0.24 Sarcasm (Joshi et. al.) 46.1 81.6 69.1 79.4 49.5 60.1 58.3 61.6 60.5 0.25 Gaze 57.3 82.7 73.8 72.9 70.5 71.3 64.2 76.1 71.9 0.41 Gaze+Sarcasm 46.7 82.1 69.6 79.7 50.5 60.8 58.9 62.5 61.2 0.26 Original system by Riloff et.al. : Rule Based with implicit incongruity Ordered 60 30 49 50 39 46 54 34 47 0.10 Unordered 56 28 46 40 42 41 46 33 42 0.16 Original system by Joshi et.al. : SVM with RBF Kernel Sarcasm (Joshi et. al.) 73.1 69.4 70.7 22.6 95.5 69.8 34.5 80.4 64.2 0.21 SVM Linear: with default parameters Unigram 56.5 77 69.8 58.6 75.5 69.5 57.5 76.2 69.6 0.34 Sarcasm (Joshi et. al.) 59.9 78.7 72.1 61.4 77.6 71.9 60.6 78.2 72 0.39 Gaze 65.9 75.9 72.4 49.7 86 73.2 56.7 80.6 72.2 0.38 Gaze+Sarcasm 63.7 79.5 74 61.7 80.9 74.1 62.7 80.2 74 0.43 Multi Instance Logistic Regression: Best Performing Classifier Gaze 65.3 77.2 73 53 84.9 73.8 58.5 80.8 73.1 0.41 Gaze+Sarcasm 62.5 84 76.5 72.6 76.7 75.3 67.2 80.2 75.7 0.47 Table 3: Classification results for different feature combinations. P→Precision, R→Recall, F→F˙score, Kappa→Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively. Sentence Gold Sarcasm Gaze Gaze+Sarcasm 1. I would like to live in Manchester, England. The transition between Manchester and death would be unnoticeable. S NS S S 2. Helped me a lot with my panic attacks. I took 6 mg a day for almost 20 years. Can’t stop of course but it makes me feel very comfortable. NS S NS NS 3. Forgot to bring my headphones to the gym this morning, the music they play in this gym pumps me up so much! S S NS NS 4. Best show on satellite radio!! No doubt about it. The little doggy company has nothing even close. NS S NS S Table 4: Example test-cases with S and NS representing labels for sarcastic and not-sarcastic respectively. 994 examples created using our eye-movement database for sarcasm detection. 
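To make the saliency-graph construction described above concrete, the sketch below builds a saliency gaze graph from a reader's fixation sequence and derives the edge density (ED) and the two largest weighted degrees used by the complex gaze features; weighting edges by the source word's fixation duration corresponds to the F1H/F1S variant in Table 2, while the input format itself is an illustrative assumption rather than the actual database schema.

```python
# Illustrative sketch of a saliency gaze graph (SGG): vertices are fixated
# words, and a directed edge (i -> j) is added whenever the reader saccades
# from word i to word j. Edge weights here use the fixation duration of the
# source word i; other Table 2 features swap in other weights.
from collections import defaultdict

def saliency_graph(fixations):
    """fixations: [(word_index, duration_ms), ...] in temporal order."""
    duration = defaultdict(float)
    edges = defaultdict(float)          # (i, j) -> accumulated weight
    for w, d in fixations:
        duration[w] += d
    for (i, _), (j, _) in zip(fixations, fixations[1:]):
        if i != j:                      # a saccade between two distinct words
            edges[(i, j)] += duration[i]
    return duration, edges

def edge_density(duration, edges):
    return len(edges) / max(len(duration), 1)      # ED: directed edges / vertices

def top_weighted_degrees(duration, edges):
    degree = defaultdict(float)
    for (i, j), w in edges.items():
        degree[i] += w                  # accumulated out-weight of the source word
    top = sorted(degree.values(), reverse=True) + [0.0, 0.0]
    return top[0], top[1]               # largest and second-largest weighted degree
```

The remaining complex features in Table 2 reuse the same graph with different edge weights (forward or regressive saccade counts and their word-count distances).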
To check the effectiveness of our feature set, we observe the performance of multiple classification techniques on our dataset through stratified 10-fold cross-validation. We also compare the classification accuracy of our system and of the best available systems, proposed by Riloff et al. (2013) and Joshi et al. (2015), on our dataset. Using the Weka (Hall et al., 2009) and LibSVM (Chang and Lin, 2011) APIs, we implement the following classifiers:

• Naïve Bayes classifier
• Support Vector Machines (Cortes and Vapnik, 1995) with default hyper-parameters
• Multilayer Feed Forward Neural Network
• Multi Instance Logistic Regression (MILR) (Xu and Frank, 2004)

6.1 Results

Table 3 shows the classification results considering various feature combinations for different classifiers and other systems. These are:

• Unigram (with principal components of unigram feature vectors),
• Sarcasm (the feature set reported by Joshi et al. (2015), subsuming unigram features and features from other reported systems),
• Gaze (the simple and complex cognitive features we introduce, along with readability and word count features), and
• Gaze+Sarcasm (the complete set of features).

Figure 3: Effect of training data size on classification in terms of (a) F-score and (b) Kappa statistics.

Figure 4: Significance of features observed by ranking the features using Attribute Evaluation based on Information Gain and Attribute Evaluation based on Chi-squared test. The length of the bar corresponds to the average merit of the feature. Features marked with * are gaze features.

For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm-related features. For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features, and thus a multi-instance "bag" of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi-instance assumption to derive class labels for each bag. For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as the feature set of Joshi et al. (2015), with the MILR classifier obtaining an F-score improvement of 3.7% and a Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline using the SVM classifier when we employ our feature set. We also observe that the gaze features alone capture the differences between the sarcasm and non-sarcasm classes with high precision but low recall. To see whether the improvement obtained is statistically significant over the state-of-the-art system with textual sarcasm features alone, we perform the McNemar test. The output of the SVM classifier using only the linguistic features used for sarcasm detection by Joshi et al.
(2015) and the output of the MILR classifier with the complete set of features are compared, setting threshold α = 0.05. There was a significant difference in the classifier’s accuracy with p(two-tailed) = 0.02 with an odds-ratio of 1.43, showing that the classification accuracy improvement is unlikely to be observed by chance in 95% confidence interval. 6.2 Considering Reading Time as a Cognitive Feature along with Sarcasm Features One may argue that, considering simple measures of reading effort like “reading time” as cognitive feature instead of the expensive eye-tracking features for sarcasm detection may be a cost-effective solution. To examine this, we repeated our experiments with “reading time” considered as the only cognitive feature, augmented with the textual features. The F-scores of all the classifiers turn out to be close to that of the classifiers considering sarcasm feature alone and the difference in the improvement is not statistically significant (p > 0.05). One the other hand, F-scores with gaze features are superior to the F-scores when reading time is considered as a cognitive feature. 6.3 How Effective are the Cognitive Features We examine the effectiveness of cognitive features on the classification accuracy by varying the input training data size. To examine this, we create a 1102 stratified (keeping the class ratio constant) random train-test split of 80%:20%. We train our classifier with 100%, 90%, 80% and 70% of the training data with our whole feature set, and the feature combination from Joshi et al. (2015). The goodness of our system is demonstrated by improvements in F-score and Kappa statistics, shown in Figure 3. We further analyze the importance of features by ranking the features based on (a) Chi squared test, and (b) Information Gain test, using Weka’s attribute selection module. Figure 4 shows the top 20 ranked features produced by both the tests. For both the cases, we observe 16 out of top 20 features to be gaze features. Further, in each of the cases, Average Fixation Duration per Word and Largest Regression Position are seen to be the two most significant features. 6.4 Example Cases Table 4 shows a few example cases from the experiment with stratified 80%-20% train-test split. • Example sentence 1 is sarcastic, and requires extra-linguistic knowledge (about poor living conditions at Manchester). Hence, the sarcasm detector relying only on textual features is unable to detect the underlying incongruity. However, our system predicts the label successfully, possibly helped by the gaze features. • Similarly, for sentence 2, the false sense of presence of incongruity (due to phrases like “Helped me” and “Can’t stop”) affects the system with only linguistic features. Our system, though, performs well in this case also. • Sentence 3 presents a false-negative case where it was hard for even humans to get the sarcasm. This is why our gaze features (and subsequently the complete set of features) account for erroneous prediction. • In sentence 4, gaze features alone falseindicate presence of incongruity, whereas the system predicts correctly when gaze and linguistic features are taken together. From these examples, it can be inferred that, only gaze features would not have sufficed to rule out the possibility of detecting other forms of incongruity that do not result in sarcasm. 
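For completeness, the McNemar test used in Section 6.1 to compare the text-only classifier of Joshi et al. (2015) with our MILR classifier can be run with a standard library implementation, as sketched below; the 2×2 contingency counts are placeholders and not the actual counts from our experiments.

```python
# Hedged sketch of the McNemar test comparing two classifiers on the same test
# sentences. The 2x2 table counts sentences on which each classifier is
# right/wrong; the numbers below are placeholders, not our real counts.
from statsmodels.stats.contingency_tables import mcnemar

# rows: text-only SVM correct / wrong; columns: Gaze+Sarcasm MILR correct / wrong
table = [[600, 120],
         [160, 114]]

result = mcnemar(table, exact=False, correction=True)   # chi-squared version
print(f"statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
# A p-value below the 0.05 threshold indicates the accuracy difference is
# unlikely to be due to chance.
```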
6.5 Error Analysis Errors committed by our system arise from multiple factors, starting from limitations of the eyetracker hardware to errors committed by linguistic tools and resources. Also, aggregating various eye-tracking parameters to extract the cognitive features may have caused information loss in the regular classification setting. 7 Conclusion In the current work, we created a novel framework to detect sarcasm, that derives insights from human cognition, that manifests over eye movement patterns. We hypothesized that distinctive eye-movement patterns, associated with reading sarcastic text, enables improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers’ eye-movement data in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature-set improved the success rate of the sarcasm detector by 3.7%, over the best available system. Using cognitive features in an NLP Processing system like ours is the first proposal of its kind. Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization and question answering, where considering textual clues alone does not prove to be sufficient. We propose to augment this work in future by exploring deeper graph and gaze features. We also propose to develop models for the purpose of learning complex gaze feature representation, that accounts for the power of individual eye movement patterns along with the aggregated patterns of eye movements. Acknowledgments We thank the members of CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support. References Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. ACL 2014, page 50. Dale J Barr, Roger Levy, Christoph Scheepers, and Harry J Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of memory and language, 68(3):255–278. 1103 C. Christine Camblin, Peter C. Gordon, and Tamara Y. Swaab. 2007. The interplay of discourse congruence and lexical association during sentence processing: Evidence from {ERPs} and eye tracking. Journal of Memory and Language, 56(1):103 – 128. John D Campbell and Albert N Katz. 2012. Are there necessary conditions for inducing a sense of sarcastic irony? Discourse Processes, 49(6):459–480. Paula Carvalho, Lu´ıs Sarmento, M´ario J Silva, and Eug´enio De Oliveira. 2009. Clues for detecting irony in user-generated contents: oh...!! it’s so easy;-). In Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion, pages 53–56. ACM. Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Software available at http://www. csie.ntu.edu.tw/∼cjlin/libsvm. Herbert H Clark and Richard J Gerrig. 1984. On the pretense theory of irony. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning, 20(3):273–297. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 107–116. Association for Computational Linguistics. Hartmut; Wallington Katie; Page Jemma Filik, Ruth; Leuthold. 2014. Testing theories of irony processing using eye-tracking and erps. 
Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(3):811–828. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Raymond W. Gibbs. 1986. Comprehension and memory for nonliteral utterances: The problem of sarcastic indirect requests. Acta Psychologica, 62(1):41 – 57. Rachel Giora. 1995. On irony and negation. Discourse processes, 19(2):239–264. Roberto Gonz´alez-Ib´anez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 581–586. Association for Computational Linguistics. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10– 18. Kenneth Holmqvist, Marcus Nystr¨om, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van de Weijer. 2011. Eye tracking: A comprehensive guide to methods and measures. Oxford University Press. Stacey L Ivanko and Penny M Pexman. 2003. Context incongruity and irony processing. Discourse Processes, 35(3):241–279. Julia Jorgensen, George A Miller, and Dan Sperber. 1984. Test of the mention theory of irony. Journal of Experimental Psychology: General, 113(1):112. Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. Proceedings of 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China, page 757. Aditya Joshi, Pushpak Bhattacharyya, and Mark James Carman. 2016. Automatic sarcasm detection: A survey. CoRR, abs/1602.03426. Marta Kutas and Steven A Hillyard. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427):203–205. Christine Liebrecht, Florian Kunneman, and Antal van den Bosch. 2013. The perfect solution for detecting sarcasm in tweets# not. WASSA 2013, page 29. Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In Proceedings of LREC. Abhijit Mishra, Diptesh Kanojia, and Pushpak Bhattacharyya. 2016. Predicting readers’ sarcasm understandability by modeling gaze behavior. In Proceedings of AAAI. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics. Keith Rayner and Sara C Sereno. 1994. Eye movements in reading: Psycholinguistic studies. Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In EMNLP, pages 704– 714. Xin Xu and Eibe Frank. 2004. Logistic regression and boosting for labeled bags of instances. In Advances in knowledge discovery and data mining, pages 272– 281. Springer. 1104
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1105–1116, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures Makoto Miwa Toyota Technological Institute Nagoya, 468-8511, Japan [email protected] Mohit Bansal Toyota Technological Institute at Chicago Chicago, IL, 60637, USA [email protected] Abstract We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional treestructured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the stateof-the-art feature-based model on end-toend relation extraction, achieving 12.1% and 5.7% relative error reductions in F1score on ACE2005 and ACE2004, respectively. We also show that our LSTMRNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components. 1 Introduction Extracting semantic relations between entities in text is an important and well-studied task in information extraction and natural language processing (NLP). Traditional systems treat this task as a pipeline of two separated tasks, i.e., named entity recognition (NER) (Nadeau and Sekine, 2007; Ratinov and Roth, 2009) and relation extraction (Zelenko et al., 2003; Zhou et al., 2005), but recent studies show that end-to-end (joint) modeling of entity and relation is important for high performance (Li and Ji, 2014; Miwa and Sasaki, 2014) since relations interact closely with entity information. For instance, to learn that Toefting and Bolton have an OrganizationAffiliation (ORG-AFF) relation in the sentence Toefting transferred to Bolton, the entity information that Toefting and Bolton are Person and Organization entities is important. Extraction of these entities is in turn encouraged by the presence of the context words transferred to, which indicate an employment relation. Previous joint models have employed feature-based structured learning. An alternative approach to this end-to-end relation extraction task is to employ automatic feature learning via neural network (NN) based models. There are two ways to represent relations between entities using neural networks: recurrent/recursive neural networks (RNNs) and convolutional neural networks (CNNs). Among these, RNNs can directly represent essential linguistic structures, i.e., word sequences (Hammerton, 2001) and constituent/dependency trees (Tai et al., 2015). Despite this representation ability, for relation classification tasks, the previously reported performance using long short-term memory (LSTM) based RNNs (Xu et al., 2015b; Li et al., 2015) is worse than one using CNNs (dos Santos et al., 2015). These previous LSTM-based systems mostly include limited linguistic structures and neural architectures, and do not model entities and relations jointly. 
We are able to achieve improvements over state-of-the-art models via endto-end modeling of entities and relations based on richer LSTM-RNN architectures that incorporate complementary linguistic structures. Word sequence and tree structure are known to be complementary information for extracting relations. For instance, dependencies between words 1105 are not enough to predict that source and U.S. have an ORG-AFF relation in the sentence “This is ...”, one U.S. source said, and the context word said is required for this prediction. Many traditional, feature-based relation classification models extract features from both sequences and parse trees (Zhou et al., 2005). However, previous RNNbased models focus on only one of these linguistic structures (Socher et al., 2012). We present a novel end-to-end model to extract relations between entities on both word sequence and dependency tree structures. Our model allows joint modeling of entities and relations in a single model by using both bidirectional sequential (left-to-right and right-to-left) and bidirectional tree-structured (bottom-up and top-down) LSTMRNNs. Our model first detects entities and then extracts relations between the detected entities using a single incrementally-decoded NN structure, and the NN parameters are jointly updated using both entity and relation labels. Unlike traditional incremental end-to-end relation extraction models, our model further incorporates two enhancements into training: entity pretraining, which pretrains the entity model, and scheduled sampling (Bengio et al., 2015), which replaces (unreliable) predicted labels with gold labels in a certain probability. These enhancements alleviate the problem of low-performance entity detection in early stages of training, as well as allow entity information to further help downstream relation classification. On end-to-end relation extraction, we improve over the state-of-the-art feature-based model, with 12.1% (ACE2005) and 5.7% (ACE2004) relative error reductions in F1-score. On nominal relation classification (SemEval-2010 Task 8), our model compares favorably to the state-of-the-art CNNbased model in F1-score. Finally, we also ablate and compare our various model components, which leads to some key findings (both positive and negative) about the contribution and effectiveness of different RNN structures, input dependency relation structures, different parsing models, external resources, and joint learning settings. 2 Related Work LSTM-RNNs have been widely used for sequential labeling, such as clause identification (Hammerton, 2001), phonetic labeling (Graves and Schmidhuber, 2005), and NER (Hammerton, 2003). Recently, Huang et al. (2015) showed that building a conditional random field (CRF) layer on top of bidirectional LSTM-RNNs performs comparably to the state-of-the-art methods in the partof-speech (POS) tagging, chunking, and NER. For relation classification, in addition to traditional feature/kernel-based approaches (Zelenko et al., 2003; Bunescu and Mooney, 2005), several neural models have been proposed in the SemEval-2010 Task 8 (Hendrickx et al., 2010), including embedding-based models (Hashimoto et al., 2015), CNN-based models (dos Santos et al., 2015), and RNN-based models (Socher et al., 2012). Recently, Xu et al. (2015a) and Xu et al. (2015b) showed that the shortest dependency paths between relation arguments, which were used in feature/kernel-based systems (Bunescu and Mooney, 2005), are also useful in NN-based models. Xu et al. 
(2015b) also showed that LSTMRNNs are useful for relation classification, but the performance was worse than CNN-based models. Li et al. (2015) compared separate sequence-based and tree-structured LSTM-RNNs on relation classification, using basic RNN model structures. Research on tree-structured LSTM-RNNs (Tai et al., 2015) fixes the direction of information propagation from bottom to top, and also cannot handle an arbitrary number of typed children as in a typed dependency tree. Furthermore, no RNNbased relation classification model simultaneously uses word sequence and dependency tree information. We propose several such novel model structures and training settings, investigating the simultaneous use of bidirectional sequential and bidirectional tree-structured LSTM-RNNs to jointly capture linear and dependency context for end-toend extraction of relations between entities. As for end-to-end (joint) extraction of relations between entities, all existing models are featurebased systems (and no NN-based model has been proposed). Such models include structured prediction (Li and Ji, 2014; Miwa and Sasaki, 2014), integer linear programming (Roth and Yih, 2007; Yang and Cardie, 2013), card-pyramid parsing (Kate and Mooney, 2010), and global probabilistic graphical models (Yu and Lam, 2010; Singh et al., 2013). Among these, structured prediction methods are state-of-the-art on several corpora. We present an improved, NN-based alternative for the end-to-end relation extraction. 1106 In 1909 , Sidney Yates was born in Chicago . B-PER L-PER word/POS embeddings Bi-LSTM hidden softmax nsubjpass prep pobj Yates born in Chicago PHYS Bi-TreeLSTM hidden softmax Sequence (Entity) Dependency (Relation) LSTM unit dropout tanh tanh dependency embeddings tanh label embeddings embeddings neural net / softmax ・・・ ・・・ Fig. 1: Our incrementally-decoded end-to-end relation extraction model, with bidirectional sequential and bidirectional tree-structured LSTM-RNNs. 3 Model We design our model with LSTM-RNNs that represent both word sequences and dependency tree structures, and perform end-to-end extraction of relations between entities on top of these RNNs. Fig. 1 illustrates the overview of the model. The model mainly consists of three representation layers: a word embeddings layer (embedding layer), a word sequence based LSTM-RNN layer (sequence layer), and finally a dependency subtree based LSTM-RNN layer (dependency layer). During decoding, we build greedy, left-to-right entity detection on the sequence layer and realize relation classification on the dependency layers, where each subtree based LSTM-RNN corresponds to a relation candidate between two detected entities. After decoding the entire model structure, we update the parameters simultaneously via backpropagation through time (BPTT) (Werbos, 1990). The dependency layers are stacked on the sequence layer, so the embedding and sequence layers are shared by both entity detection and relation classification, and the shared parameters are affected by both entity and relation labels. 3.1 Embedding Layer The embedding layer handles embedding representations. nw, np, nd and ne-dimensional vectors v(w), v(p), v(d) and v(e) are embedded to words, part-of-speech (POS) tags, dependency types, and entity labels, respectively. 3.2 Sequence Layer The sequence layer represents words in a linear sequence using the representations from the embedding layer. This layer represents sentential context information and maintains entities, as shown in bottom-left part of Fig. 
1. We represent the word sequence in a sentence with bidirectional LSTM-RNNs (Graves et al., 2013). The LSTM unit at t-th word consists of a collection of nls-dimensional vectors: an input gate it, a forget gate ft, an output gate ot, a memory cell ct, and a hidden state ht. The unit receives an n-dimensional input vector xt, the previous hidden state ht−1, and the memory cell ct−1, and calculates the new vectors using the following equations: it = σ  W (i)xt + U(i)ht−1 + b(i) , (1) ft = σ  W (f)xt + U(f)ht−1 + b(f) , ot = σ  W (o)xt + U(o)ht−1 + b(o) , ut = tanh  W (u)xt + U(u)ht−1 + b(u) , ct = it⊙ut + ft⊙ct−1, ht = ot⊙tanh(ct), where σ denotes the logistic function, ⊙denotes element-wise multiplication, W and U are weight matrices, and b are bias vectors. The LSTM unit at t-th word receives the concatenation of word and POS embeddings as its input vector: xt = h v(w) t ; v(p) t i . We also concatenate the hidden state vectors of the two directions’ LSTM units corresponding to each word (denoted as −→ ht and ←− ht) as 1107 its output vector, st = h−→ ht; ←− ht i , and pass it to the subsequent layers. 3.3 Entity Detection We treat entity detection as a sequence labeling task. We assign an entity tag to each word using a commonly used encoding scheme BILOU (Begin, Inside, Last, Outside, Unit) (Ratinov and Roth, 2009), where each entity tag represents the entity type and the position of a word in the entity. For example, in Fig. 1, we assign B-PER and LPER (which denote the beginning and last words of a person entity type, respectively) to each word in Sidney Yates to represent this phrase as a PER (person) entity type. We perform entity detection on top of the sequence layer. We employ a two-layered NN with an nhe-dimensional hidden layer h(e) and a softmax output layer for entity detection. h(e) t = tanh  W (eh)[st; v(e) t−1] + b(eh) (2) yt = softmax  W (ey)h(e) t + b(ey) (3) Here, W are weight matrices and b are bias vectors. We assign entity labels to words in a greedy, left-to-right manner.1 During this decoding, we use the predicted label of a word to predict the label of the next word so as to take label dependencies into account. The NN above receives the concatenation of its corresponding outputs in the sequence layer and the label embedding for its previous word (Fig. 1). 3.4 Dependency Layer The dependency layer represents a relation between a pair of two target words (corresponding to a relation candidate in relation classification) in the dependency tree, and is in charge of relationspecific representations, as is shown in top-right part of Fig. 1. This layer mainly focuses on the shortest path between a pair of target words in the dependency tree (i.e., the path between the least common node and the two target words) since these paths are shown to be effective in relation classification (Xu et al., 2015a). For example, we show the shortest path between Yates and Chicago in the bottom of Fig. 1, and this path well captures the key phrase of their relation, i.e., born in. 1We also tried beam search but this did not show improvements in initial experiments. We employ bidirectional tree-structured LSTMRNNs (i.e., bottom-up and top-down) to represent a relation candidate by capturing the dependency structure around the target word pair. This bidirectional structure propagates to each node not only the information from the leaves but also information from the root. 
This is especially important for relation classification, which makes use of argument nodes near the bottom of the tree, and our top-down LSTM-RNN sends information from the top of the tree to such near-leaf nodes (unlike in standard bottom-up LSTM-RNNs).2 Note that the two variants of tree-structured LSTM-RNNs by Tai et al. (2015) are not able to represent our target structures which have a variable number of typed children: the Child-Sum Tree-LSTM does not deal with types and the N-ary Tree assumes a fixed number of children. We thus propose a new variant of tree-structured LSTM-RNN that shares weight matrices Us for same-type children and also allows variable number of children. For this variant, we calculate nlt-dimensional vectors in the LSTM unit at t-th node with C(t) children using following equations: it = σ  W (i)xt + X l∈C(t) U(i) m(l)htl + b(i)  , (4) ftk = σ  W (f)xt + X l∈C(t) U(f) m(k)m(l)htl + b(f)  , ot = σ  W (o)xt + X l∈C(t) U(o) m(l)htl + b(o)  , ut = tanh  W (u)xt + X l∈C(t) U(u) m(l)htl + b(u)  , ct = it⊙ut + X l∈C(t) ftl⊙ctl, ht = ot⊙tanh(ct), where m(·) is a type mapping function. To investigate appropriate structures to represent relations between two target word pairs, we experiment with three structure options. We primarily employ the shortest path structure (SPTree), which captures the core dependency path between a target word pair and is widely used in relation classification models, e.g., (Bunescu and 2We also tried to use one LSTM-RNN by connecting the root (Paulus et al., 2014), but preparing two LSTM-RNNs showed slightly better performance in our initial experiments. 1108 Mooney, 2005; Xu et al., 2015a). We also try two other dependency structures: SubTree and FullTree. SubTree is the subtree under the lowest common ancestor of the target word pair. This provides additional modifier information to the path and the word pair in SPTree. FullTree is the full dependency tree. This captures context from the entire sentence. While we use one node type for SPTree, we define two node types for SubTree and FullTree, i.e., one for nodes on shortest paths and one for all other nodes. We use the type mapping function m(·) to distinguish these two nodes types. 3.5 Stacking Sequence and Dependency Layers We stack the dependency layers (corresponding to relation candidates) on top of the sequence layer to incorporate both word sequence and dependency tree structure information into the output. The dependency-layer LSTM unit at the t-th word receives as input xt = h st; v(d) t ; v(e) t i , i.e., the concatenation of its corresponding hidden state vectors st in the sequence layer, dependency type embedding v(d) t (denotes the type of dependency to the parent3), and label embedding v(e) t (corresponds to the predicted entity label). 3.6 Relation Classification We incrementally build relation candidates using all possible combinations of the last words of detected entities, i.e., words with L or U labels in the BILOU scheme, during decoding. For instance, in Fig. 1, we build a relation candidate using Yates with an L-PER label and Chicago with an U-LOC label. For each relation candidate, we realize the dependency layer dp (described above) corresponding to the path between the word pair p in the relation candidate, and the NN receives a relation candidate vector constructed from the output of the dependency tree layer, and predicts its relation label. We treat a pair as a negative relation when the detected entities are wrong or when the pair has no relation. 
We represent relation labels by type and direction, except for negative relations that have no direction. The relation candidate vector is constructed as the concatenation dp = [↑hpA; ↓hp1; ↓hp2], where ↑hpA is the hidden state vector of the top LSTM 3We use the dependency to the parent since the number of children varies. Dependency types can also be incorporated into m(·), but this did not help in initial experiments. unit in the bottom-up LSTM-RNN (representing the lowest common ancestor of the target word pair p), and ↓hp1, ↓hp2 are the hidden state vectors of the two LSTM units representing the first and second target words in the top-down LSTMRNN.4 All the corresponding arrows are shown in Fig. 1. Similarly to the entity detection, we employ a two-layered NN with an nhr-dimensional hidden layer h(r) and a softmax output layer (with weight matrices W, bias vectors b). h(r) p = tanh  W (rh)dp + b(rh) (5) yp = softmax  W (ry)h(r) t + b(ry) (6) We construct the input dp for relation classification from tree-structured LSTM-RNNs stacked on sequential LSTM-RNNs, so the contribution of sequence layer to the input is indirect. Furthermore, our model uses words for representing entities, so it cannot fully use the entity information. To alleviate these problems, we directly concatenate the average of hidden state vectors for each entity from the sequence layer to the input dp to relation classification, i.e., d′ p = h dp; 1 |Ip1| P i∈Ip1 si; 1 |Ip2| P i∈Ip2 si i (Pair), where Ip1 and Ip2 represent sets of word indices in the first and second entities.5 Also, we assign two labels to each word pair in prediction since we consider both left-to-right and right-to-left directions. When the predicted labels are inconsistent, we select the positive and more confident label, similar to Xu et al. (2015a). 3.7 Training We update the model parameters including weights, biases, and embeddings by BPTT and Adam (Kingma and Ba, 2015) with gradient clipping, parameter averaging, and L2-regularization (we regularize weights W and U, not the bias terms b). We also apply dropout (Srivastava et al., 2014) to the embedding layer and to the final hidden layers for entity detection and relation classification. We employ two enhancements, scheduled sampling (Bengio et al., 2015) and entity pretraining, to alleviate the problem of unreliable prediction of entities in the early stage of training, 4Note that the order of the target words corresponds to the direction of the relation, not the positions in the sentence. 5Note that we do not show this Pair in Fig.1 for simplicity. 1109 and to encourage building positive relation instances from the detected entities. In scheduled sampling, we use gold labels as prediction in the probability of ϵi that depends on the number of epochs i during training if the gold labels are legal. As for ϵi, we choose the inverse sigmoid decay ϵi = k/(k + exp(i/k)), where k(≥1) is a hyper-parameter that adjusts how often we use the gold labels as prediction. Entity pretraining is inspired by (Pentina et al., 2015), and we pretrain the entity detection model using the training data before training the entire model parameters. 4 Results and Discussion 4.1 Data and Task Settings We evaluate on three datasets: ACE05 and ACE04 for end-to-end relation extraction, and SemEval2010 Task 8 for relation classification. We use the first two datasets as our primary target, and use the last one to thoroughly analyze and ablate the relation classification part of our model. 
ACE05 defines 7 coarse-grained entity types and 6 coarse-grained relation types between entities. We use the same data splits, preprocessing, and task settings as Li and Ji (2014). We report the primary micro F1-scores as well as micro precision and recall on both entity and relation extraction to better explain model performance. We treat an entity as correct when its type and the region of its head are correct. We treat a relation as correct when its type and argument entities are correct; we thus treat all non-negative relations on wrong entities as false positives. ACE04 defines the same 7 coarse-grained entity types as ACE05 (Doddington et al., 2004), but defines 7 coarse-grained relation types. We follow the cross-validation setting of Chan and Roth (2011) and Li and Ji (2014), and the preprocessing and evaluation metrics of ACE05. SemEval-2010 Task 8 defines 9 relation types between nominals and a tenth type Other when two nouns have none of these relations (Hendrickx et al., 2010). We treat this Other type as a negative relation type, and no direction is considered. The dataset consists of 8,000 training and 2,717 test sentences, and each sentence is annotated with a relation between two given nominals. We randomly selected 800 sentences from the training set as our development set. We followed the official task setting, and report the official macro-averaged F1-score (Macro-F1) on the 9 relation types. For more details of the data and task settings, please refer to the supplementary material. 4.2 Experimental Settings We implemented our model using the cnn library.6 We parsed the texts using the Stanford neural dependency parser7 (Chen and Manning, 2014) with the original Stanford Dependencies. Based on preliminary tuning, we fixed embedding dimensions nw to 200, np, nd, ne to 25, and dimensions of intermediate layers (nls, nlt of LSTM-RNNs and nhe, nhr of hidden layers) to 100. We initialized word vectors via word2vec (Mikolov et al., 2013) trained on Wikipedia8 and randomly initialized all other parameters. We tuned hyper-parameters using development sets for ACE05 and SemEval2010 Task 8 to achieve high primary (Micro- and Macro-) F1-scores.9 For ACE04, we directly employed the best parameters for ACE05. The hyperparameter settings are shown in the supplementary material. For SemEval-2010 Task 8, we also omitted the entity detection and label embeddings since only target nominals are annotated and the task defines no entity types. Our statistical significance results are based on the Approximate Randomization (AR) test (Noreen, 1989). 4.3 End-to-end Relation Extraction Results Table 1 compares our model with the state-of-theart feature-based model of Li and Ji (2014)10 on final test sets, and shows that our model performs better than the state-of-the-art model. To analyze the contributions and effects of the various components of our end-to-end relation extraction model, we perform ablation tests on the ACE05 development set (Table 2). The performance slightly degraded without scheduled sampling, and the performance significantly degraded when we removed entity pretraining or removed both (p<0.05). This is reasonable because the model can only create relation instances when both of the entities are found and, without these enhancements, it may get too late to find some relations. 
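To illustrate the scheduled sampling schedule of Section 3.7, the snippet below implements the inverse sigmoid decay ϵi = k/(k + exp(i/k)) and the resulting choice between gold and predicted entity labels; it is a schematic sketch rather than the actual training code (which is implemented with the cnn library), and the value k = 5 is used purely for illustration.

```python
# Sketch of scheduled sampling with the inverse sigmoid decay used in the
# paper: eps_i = k / (k + exp(i / k)). Early in training (small epoch i),
# eps_i is close to 1 and gold entity labels are mostly fed to the relation
# layers; later, the model's own predictions take over.
import math
import random

def inverse_sigmoid_decay(epoch, k=5.0):
    """Probability of using the gold label at a given epoch (k >= 1)."""
    return k / (k + math.exp(epoch / k))

def choose_label(gold_label, predicted_label, epoch, k=5.0, legal=True):
    """Use the gold label with probability eps_i, provided it is legal."""
    if legal and random.random() < inverse_sigmoid_decay(epoch, k):
        return gold_label
    return predicted_label

# For k = 5, eps is roughly 0.8 at the first epoch and decays towards 0.
```

The `legal` flag stands in for the paper's condition that the gold label is only substituted when it is legal at that point in the greedy, left-to-right decoding.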
Removing label embeddings did not affect 6https://github.com/clab/cnn 7http://nlp.stanford.edu/software/ stanford-corenlp-full-2015-04-20.zip 8https://dumps.wikimedia.org/enwiki/ 20150901/ 9We did not tune the precision-recall trade-offs, but doing so can specifically improve precision further. 10Other work on ACE is not comparable or performs worse than the model by Li and Ji (2014). 1110 Corpus Settings Entity Relation P R F1 P R F1 ACE05 Our Model (SPTree) 0.829 0.839 0.834 0.572 0.540 0.556 Li and Ji (2014) 0.852 0.769 0.808 0.654 0.398 0.495 ACE04 Our Model (SPTree) 0.808 0.829 0.818 0.487 0.481 0.484 Li and Ji (2014) 0.835 0.762 0.797 0.608 0.361 0.453 Table 1: Comparison with the state-of-the-art on the ACE05 test set and ACE04 dataset. Settings Entity Relation P R F1 P R F1 Our Model (SPTree) 0.815 0.821 0.818 0.506 0.529 0.518 −Entity pretraining (EP) 0.793 0.798 0.796 0.494 0.491 0.492* −Scheduled sampling (SS) 0.812 0.818 0.815 0.522 0.490 0.505 −Label embeddings (LE) 0.811 0.821 0.816 0.512 0.499 0.505 −Shared parameters (Shared) 0.796 0.820 0.808 0.541 0.482 0.510 −EP, SS 0.781 0.804 0.792 0.509 0.479 0.494* −EP, SS, LE, Shared 0.800 0.815 0.807 0.520 0.452 0.484** Table 2: Ablation tests on the ACE05 development dataset. * denotes significance at p<0.05, ** denotes p<0.01. Settings Entity Relation P R F1 P R F1 SPTree 0.815 0.821 0.818 0.506 0.529 0.518 SubTree 0.812 0.818 0.815 0.525 0.506 0.515 FullTree 0.806 0.816 0.811 0.536 0.507 0.521 SubTree (-SP) 0.803 0.816 0.810 0.533 0.495 0.514 FullTree (-SP) 0.804 0.817 0.811 0.517 0.470 0.492* Child-Sum 0.806 0.819 0.8122 0.514 0.499 0.506 SPSeq 0.801 0.813 0.807 0.500 0.523 0.511 SPXu 0.809 0.818 0.813 0.494 0.522 0.508 Table 3: Comparison of LSTM-RNN structures on the ACE05 development dataset. the entity detection performance, but this degraded the recall in relation classification. This indicates that entity label information is helpful in detecting relations. We also show the performance without sharing parameters, i.e., embedding and sequence layers, for detecting entities and relations (−Shared parameters); we first train the entity detection model, detect entities with the model, and build a separate relation extraction model using the detected entities, i.e., without entity detection. This setting can be regarded as a pipeline model since two separate models are trained sequentially. Without the shared parameters, both the performance in entity detection and relation classification drops slightly, although the differences are not significant. When we removed all the enhancements, i.e., scheduled sampling, entity pretraining, label embedding, and shared parameters, the performance is significantly worse than SPTree (p<0.01), showing that these enhancements provide complementary benefits to end-to-end relation extraction. Next, we show the performance with different LSTM-RNN structures in Table 3. We first compare the three input dependency structures (SPTree, SubTree, FullTree) for tree-structured LSTM-RNNs. Performances on these three structures are almost same when we distinguish the nodes in the shortest paths from other nodes, but when we do not distinguish them (-SP), the information outside of the shortest path, i.e., 1111 FullTree (-SP), significantly hurts performance (p<0.05). We then compare our tree-structured LSTM-RNN (SPTree) with the Child-Sum treestructured LSTM-RNN on the shortest path of Tai et al. (2015). Child-Sum performs worse than our SPTree model, but not with as big of a decrease as above. 
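Before turning to why the Child-Sum and SPTree cells behave so similarly, the following numpy sketch spells out the node update of our typed-children tree-LSTM variant (Eq. 4), which shares U matrices across children of the same node type and keeps one forget gate per child; the initialization and type handling are simplified for illustration and do not reproduce the actual implementation.

```python
# Minimal numpy sketch (not the actual implementation) of one node update in
# the typed-children tree-LSTM of Eq. (4): U matrices are shared across
# children of the same type, and each child gets its own forget gate.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TypedTreeLSTMCell:
    def __init__(self, n_in, n_hid, types, seed=0):
        rng = np.random.RandomState(seed)
        def mat(r, c):
            return rng.uniform(-0.1, 0.1, (r, c))
        self.W = {g: mat(n_hid, n_in) for g in "ifou"}
        self.b = {g: np.zeros(n_hid) for g in "ifou"}
        # U^(i|o|u)_{m(l)}: one hidden-to-hidden matrix per child type
        self.U = {g: {t: mat(n_hid, n_hid) for t in types} for g in "iou"}
        # U^(f)_{m(k)m(l)}: one matrix per (gated-child type, summed-child type)
        self.Uf = {(tk, tl): mat(n_hid, n_hid) for tk in types for tl in types}

    def forward(self, x, children):
        """children: list of (node_type, h_child, c_child) triples."""
        def gate(g):
            s = self.W[g] @ x + self.b[g]
            for t, h, _ in children:
                s += self.U[g][t] @ h
            return s
        i = sigmoid(gate("i"))
        o = sigmoid(gate("o"))
        u = np.tanh(gate("u"))
        c = i * u
        for tk, _, ck in children:            # one forget gate per child k
            fk = self.W["f"] @ x + self.b["f"]
            for tl, hl, _ in children:
                fk += self.Uf[(tk, tl)] @ hl
            c += sigmoid(fk) * ck
        h = o * np.tanh(c)
        return h, c

# e.g. a node on the shortest path with two same-type children:
cell = TypedTreeLSTMCell(n_in=4, n_hid=3, types=["on_path", "off_path"])
child = ("on_path", np.zeros(3), np.zeros(3))
h, c = cell.forward(np.zeros(4), [child, child])
```

The Child-Sum cell of Tai et al. (2015), by contrast, uses a single untyped U matrix and computes each child's forget gate from that child's hidden state alone.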
This may be because the difference in the models appears only on nodes that have multiple children and all the nodes except for the least common node have one child. We finally show results with two counterparts of sequence-based LSTM-RNNs using the shortest path (last two rows in Table 3). SPSeq is a bidirectional LSTM-RNN on the shortest path. The LSTM unit receives input from the sequence layer concatenated with embeddings for the surrounding dependency types and directions. We concatenate the outputs of the two RNNs for the relation candidate. SPXu is our adaptation of the shortest path LSTM-RNN proposed by Xu et al. (2015b) to match our sequence-layer based model.11 This has two LSTM-RNNs for the left and right subpaths of the shortest path. We first calculate the max pooling of the LSTM units for each of these two RNNs, and then concatenate the outputs of the pooling for the relation candidate. The comparison with these sequence-based LSTM-RNNs indicates that a tree-structured LSTM-RNN is comparable to sequence-based ones in representing shortest paths. Overall, the performance comparison of the LSTM-RNN structures in Table 3 show that for end-to-end relation extraction, selecting the appropriate tree structure representation of the input (i.e., the shortest path) is more important than the choice of the LSTM-RNN structure on that input (i.e., sequential versus tree-based). 4.4 Relation Classification Analysis Results To thoroughly analyze the relation classification part alone, e.g., comparing different LSTM structures, architecture components such as hidden layers and input information, and classification task settings, we use the SemEval-2010 Task 8. This dataset, often used to evaluate NN models for relation classification, annotates only relation-related nominals (unlike ACE datasets), so we can focus cleanly on the relation classification part. 11This is different from the original one in that we use the sequence layer and we concatenate the embeddings for the input, while the original one prepared individual LSTM-RNNs for different inputs and concatenated their outputs. Settings Macro-F1 No External Knowledge Resources Our Model (SPTree) 0.844 dos Santos et al. (2015) 0.841 Xu et al. (2015a) 0.840 +WordNet Our Model (SPTree + WordNet) 0.855 Xu et al. (2015a) 0.856 Xu et al. (2015b) 0.837 Table 4: Comparison with state-of-the-art models on SemEval-2010 Task 8 test-set. Settings Macro-F1 SPTree 0.851 SubTree 0.839 FullTree 0.829∗ SubTree (-SP) 0.840 FullTree (-SP) 0.828∗ Child-Sum 0.838 SPSeq 0.844 SPXu 0.847 Table 5: Comparison of LSTM-RNN structures on SemEval-2010 Task 8 development set. We first report official test set results in Table 4. Our novel LSTM-RNN model is comparable to both the state-of-the-art CNN-based models on this task with or without external sources, i.e., WordNet, unlike the previous best LSTM-RNN model (Xu et al., 2015b).12 Next, we compare different LSTM-RNN structures in Table 5. As for the three input dependency structures (SPTree, SubTree, FullTree), FullTree performs significantly worse than other structures regardless of whether or not we distinguish the nodes in the shortest paths from the other nodes, which hints that the information outside of the shortest path significantly hurts the performance (p<0.05). We also compare our treestructured LSTM-RNN (SPTree) with sequencebased LSTM-RNNs (SPSeq and SPXu) and treestructured LSTM-RNNs (Child-Sum). 
All these LSTM-RNNs perform slightly worse than our SP12When incorporating WordNet information into our model, we prepared embeddings for WordNet hypernyms extracted by SuperSenseTagger (Ciaramita and Altun, 2006) and concatenated the embeddings to the input vector (the concatenation of word and POS embeddings) of the sequence LSTM. We tuned the dimension of the WordNet embeddings and set it to 15 using the development dataset. 1112 Settings Macro-F1 SPTree 0.851 −Hidden layer 0.839 −Sequence layer 0.840 −Pair 0.844 −Pair, Sequence layer 0.827∗ Stanford PCFG 0.844 +WordNet 0.854 Left-to-right candidates 0.843 Neg. sampling (Xu et al., 2015a) 0.848 Table 6: Model setting ablations on SemEval2010 development set. Tree model, but the differences are small. Overall, for relation classification, although the performance comparison of the LSTM-RNN structures in Table 5 produces different results on FullTree as compared to the results on ACE05 in Table 3, the trend still holds that selecting the appropriate tree structure representation of the input is more important than the choice of the LSTMRNN structure on that input. Finally, Table 6 summarizes the contribution of several model components and training settings on SemEval relation classification. We first remove the hidden layer by directly connecting the LSTM-RNN layers to the softmax layers, and found that this slightly degraded performance, but the difference was small. We then skip the sequence layer and directly use the word and POS embeddings for the dependency layer. Removing the sequence layer13 or entity-related information from the sequence layer (−Pair) slightly degraded performance, and, on removing both, the performance dropped significantly (p<0.05). This indicates that the sequence layer is necessary but the last words of nominals are almost enough for expressing the relations in this task. When we replace the Stanford neural dependency parser with the Stanford lexicalized PCFG parser (Stanford PCFG), the performance slightly dropped, but the difference was small. This indicates that the selection of parsing models is not critical. We also included WordNet, and this slightly improved the performance (+WordNet), but the difference was small. Lastly, for the generation of relation candidates, generating only leftto-right candidates slightly degraded the perfor13Note that this setting still uses some sequence layer information since it uses the entity-related information (Pair). mance, but the difference was small and hence the creation of right-to-left candidates was not critical. Treating the inverse relation candidate as a negative instance (Negative sampling) also performed comparably to other generation methods in our model (unlike Xu et al. (2015a), which showed a significance improvement over generating only left-to-right candidates). 5 Conclusion We presented a novel end-to-end relation extraction model that represents both word sequence and dependency tree structures by using bidirectional sequential and bidirectional tree-structured LSTM-RNNs. This allowed us to represent both entities and relations in a single model, achieving gains over the state-of-the-art, feature-based system on end-to-end relation extraction (ACE04 and ACE05), and showing favorably comparable performance to recent state-of-the-art CNNbased models on nominal relation classification (SemEval-2010 Task 8). Our evaluation and ablation led to three key findings. First, the use of both word sequence and dependency tree structures is effective. 
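Relating the candidate-generation ablations above back to Section 3.6, the sketch below extracts entity spans from a BILOU-tagged sentence and enumerates relation candidates over the last words of the detected entities, either in both directions (as in our model) or left-to-right only; the helper is illustrative and not the actual decoding code.

```python
# Illustrative sketch (not the actual decoder): reading entities off a BILOU
# tag sequence and building relation candidates from the last words of the
# detected entities, in both directions or left-to-right only.
from itertools import combinations, permutations

def extract_entities(tags):
    """Return (start, end, type) spans from a BILOU tag sequence."""
    entities, start = [], None
    for i, tag in enumerate(tags):
        if tag.startswith("U-"):
            entities.append((i, i, tag[2:]))
        elif tag.startswith("B-"):
            start = i
        elif tag.startswith("L-") and start is not None:
            entities.append((start, i, tag[2:]))
            start = None
    return entities

def relation_candidates(tags, both_directions=True):
    last_words = [(end, etype) for _, end, etype in extract_entities(tags)]
    pairs = permutations(last_words, 2) if both_directions else combinations(last_words, 2)
    return list(pairs)

# "In 1909 , Sidney Yates was born in Chicago ." (cf. Fig. 1)
tags = ["O", "O", "O", "B-PER", "L-PER", "O", "O", "O", "U-LOC", "O"]
print(relation_candidates(tags))
# [((4, 'PER'), (8, 'LOC')), ((8, 'LOC'), (4, 'PER'))]
```

At prediction time each unordered pair thus receives two labels, one per direction, and the positive and more confident label is kept, as described in Section 3.6.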
Second, training with the shared parameters improves relation extraction accuracy, especially when employed with entity pretraining, scheduled sampling, and label embeddings. Finally, the shortest path, which has been widely used in relation classification, is also appropriate for representing tree structures in neural LSTM models. Acknowledgments We thank Qi Li, Kevin Gimpel, and the anonymous reviewers for dataset details and helpful discussions. References Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099. Razvan C Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 724–731. ACL. 1113 Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551–560, Portland, Oregon, USA, June. ACL. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar, October. ACL. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 594–602, Sydney, Australia, July. ACL. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ace) program – tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC-2004), Lisbon, Portugal, May. European Language Resources Association (ELRA). Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 626–634, Beijing, China, July. ACL. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645– 6649. IEEE. James Hammerton. 2001. Clause identification with long short-term memory. In Proceedings of the 2001 workshop on Computational Natural Language Learning-Volume 7, page 22. ACL. James Hammerton. 2003. Named entity recognition with long short-term memory. In Walter Daelemans and Miles Osborne, editors, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 172–175. ACL. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Taskoriented learning of word embeddings for semantic relation classification. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 268–278, Beijing, China, July. ACL. 
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38, Uppsala, Sweden, July. ACL. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Rohit J. Kate and Raymond Mooney. 2010. Joint entity and relation extraction using cardpyramid parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 203–212, Uppsala, Sweden, July. ACL. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR 2015, San Diego, CA, May. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402–412, Baltimore, Maryland, June. ACL. 1114 Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304–2314, Lisbon, Portugal, September. ACL. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867, Lisbon, Portugal, September. ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858– 1869, Doha, Qatar, October. ACL. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses : An Introduction. Wiley-Interscience, April. Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2888–2896. Curran Associates, Inc. Anastasia Pentina, Viktoriia Sharmanska, and Christoph H. Lampert. 2015. Curriculum learning of multiple tasks. In IEEE Conference on Computer Vision and Pattern Recognition CVPR, pages 5492–5500, Boston, MA, USA, June. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147– 155, Boulder, Colorado, June. ACL. Dan Roth and Wen-Tau Yih, 2007. Global Inference for Entity and Relation Identification via a Linear Programming Formulation. MIT Press. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 1–6. ACM. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 
2012. Semantic compositionality through recursive matrixvector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211, Jeju Island, Korea, July. ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long shortterm memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China, July. ACL. Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 536– 540, Lisbon, Portugal, September. ACL. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks 1115 along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1785–1794, Lisbon, Portugal, September. ACL. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640–1649, Sofia, Bulgaria, August. ACL. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Coling 2010: Posters, pages 1399–1407, Beijing, China, August. Coling 2010 Organizing Committee. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Research, 3:1083–1106. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 427–434, Ann Arbor, Michigan, June. ACL. A Supplemental Material A.1 Data and Task Settings ACE05 defines 7 coarse-grained entity types: Facility (FAC), Geo-Political Entities (GPE), Location (LOC), Organization (ORG), Person (PER), Vehicle (VEH) and Weapon (WEA), and 6 coarse-grained relation types between entities: Artifact (ART), Gen-Affiliation (GEN-AFF), Org-Affiliation (ORG-AFF), Part-Whole (PARTWHOLE), Person-Social (PER-SOC) and Physical (PHYS). We removed the cts, un subsets, and used a 351/80/80 train/dev/test split. We removed duplicated entities and relations, and resolved nested entities. We used head spans for entities. We follow the settings by (Li and Ji, 2014), and we did not use the full mention boundary unlike Lu and Roth (2015). We use entities and relations to refer to entity mentions and relation mentions in ACE for brevity. 
ACE04 defines the same 7 coarse-grained entity types as ACE05 (Doddington et al., 2004), but defines 7 coarse-grained relation types: PHYS, PER-SOC, Employment/Membership/Subsidiary (EMP-ORG), ART, PER/ORG affiliation (Other-AFF), GPE affiliation (GPE-AFF), and Discourse (DISC). We follow the cross-validation setting of Chan and Roth (2011) and Li and Ji (2014): we removed DISC and performed 5-fold CV on the bnews and nwire subsets (348 documents). We use the same preprocessing and evaluation metrics as for ACE05.

SemEval-2010 Task 8 defines 9 relation types between nominals (Cause-Effect, Instrument-Agency, Product-Producer, Content-Container, Entity-Origin, Entity-Destination, Component-Whole, Member-Collection and Message-Topic), and a tenth type Other when two nouns have none of these relations (Hendrickx et al., 2010). We treat this Other type as a negative relation type, and no direction is considered. The dataset consists of 8,000 training and 2,717 test sentences, and each sentence is annotated with a relation between two given nominals. We randomly selected 800 sentences from the training set as our development set. We followed the official task setting and report the official macro-averaged F1-score (Macro-F1) on the 9 relation types.

A.2 Hyper-parameter Settings

Below we list the hyper-parameters, with the ranges tried given in parentheses: the initial learning rate (5e-3, 2e-3, 1e-3, 5e-4, 2e-4, 1e-4), the regularization parameter (1e-4, 1e-5, 1e-6, 1e-7), dropout probabilities (0.0, 0.1, 0.2, 0.3, 0.4, 0.5), the size of gradient clipping (1, 5, 10, 50, 100), the scheduled sampling parameter k (1, 5, 10, 50, 100), the number of epochs for training and entity pretraining (≤ 100), and the embedding dimension of WordNet hypernyms (5, 10, 15, 20, 25, 30).
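The macro-averaged F1 described above (averaging over the 9 relation types while treating Other as a negative class, with direction ignored in this setting) can be illustrated with a short Python sketch. This is only an approximation of the metric for illustration and not the task's official Perl scorer; the label strings and toy predictions below are assumptions of the sketch.

from sklearn.metrics import f1_score

RELATION_TYPES = [
    "Cause-Effect", "Instrument-Agency", "Product-Producer",
    "Content-Container", "Entity-Origin", "Entity-Destination",
    "Component-Whole", "Member-Collection", "Message-Topic",
]

def macro_f1(gold_labels, predicted_labels):
    # Macro-F1 over the 9 relation types only; "Other" is excluded from the
    # average but still counts against a system through false positives
    # (gold Other predicted as a relation) and false negatives (gold relation
    # predicted as Other).
    return f1_score(gold_labels, predicted_labels,
                    labels=RELATION_TYPES, average="macro")

if __name__ == "__main__":
    gold = ["Cause-Effect", "Other", "Message-Topic", "Entity-Origin"]
    pred = ["Cause-Effect", "Message-Topic", "Message-Topic", "Other"]
    print(f"Macro-F1: {macro_f1(gold, pred):.3f}")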
2016
105
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1117–1126, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A short proof that O2 is an MCFL Mark-Jan Nederhof School of Computer Science University of St Andrews, UK Abstract We present a new proof that O2 is a multiple context-free language. It contrasts with a recent proof by Salvati (2015) in its avoidance of concepts that seem specific to two-dimensional geometry, such as the complex exponential function. Our simple proof creates realistic prospects of widening the results to higher dimensions. This finding is of central importance to the relation between extreme free word order and classes of grammars used to describe the syntax of natural language. 1 Introduction The alphabet of the MIX language has three symbols, a, b and c. A string is in the language if and only if the number of a’s, the number of b’s, and the number of c’s are all the same. A different way of defining the MIX language is as permutation closure of the regular language (abc)∗, as noted by Bach (1981); see also Pullum (1983). If a, b and c represent, say, a transitive verb and its subject and its object, then a string in MIX represents a sentence with any number of triples of these constituents, in a hypothetical language with extreme free word order. This is admittedly rather unlike any actual natural language. Joshi (1985) argued that because of this, grammatical formalisms for describing natural languages should not be capable of generating MIX. He also conjectured that MIX was beyond the generative capacity of one particular formalism, namely the tree adjoining grammars. Several decades passed before Kanazawa and Salvati (2012) finally proved this conjecture. MIX has been studied in the context of several other formalisms. Joshi et al. (1991) showed that MIX is generated by a generalization of tree adjoining grammars that decouples local domination for linear precedence. Boullier (1999) showed that MIX is generated by a range concatenation grammar. Negative results were addressed by Sorokin (2014) for well-nested multiple context-free grammars, and by Capelletti and Tamburini (2009) for a class of categorial grammars. The MIX language is also of interest outside of computational linguistics, e.g. in computational group theory (Gilman, 2005). A considerable advance in the understanding of the MIX language is due to Salvati (2015), who showed that MIX is generated by a multiple context-free grammar (MCFG). The main part of the proof shows that the language O2 is generated by a MCFG. This language has four symbols, a, a, b and b. A string is in the language if and only if the number of a’s equals the number of a’s, and the number of b’s equals the number of b’s. MIX and O2 are rationally equivalent, which means that if one is generated by a multiple context-free grammar, then so is the other. The proof by Salvati (2015) is remarkable, in that it is one of the few examples of geometry being used to prove a statement about formal languages. The proof has two related disadvantages however. The first is that a key element of the proof, that of the complex exponential function, is not immediately understood without background in geometry. The second is that this also seems to restrict the proof technique to two dimensions, and there is no obvious avenue to generalize the result to a variant of MIX with four or five symbols. 
We hope to remedy this by an alternative, self-contained proof that avoids the complex exponential function. The core idea is a straightforward normalization of paths in two dimensions, which allow simple arguments to lead to a proof by contradiction. We also sketch part of a possible proof in three dimensions. 1117 S(abababba) R(ababab, ba) R(ab, ab) R(a, a) R(b, b) R(ab, ba) R(a, a) R(b, b) (1) (4) (2) (3) (6) (9) (7) (9) Figure 1: Derivation in G. The numbers indicate the rules that were used. 2 Initial problem The MCFG G is defined as: S(xy) ← R(x, y) (1) R(xp, yq) ← R(x, y) R(p, q) (2) R(xp, qy) ← R(x, y) R(p, q) (3) R(xpy, q) ← R(x, y) R(p, q) (4) R(p, xqy) ← R(x, y) R(p, q) (5) R(a, a) ← (6) R(a, a) ← (7) R(b, b) ← (8) R(b, b) ← (9) R(ε, ε) ← (10) For the meaning of MCFGs in general, see Seki et al. (1991); for a closely related formalism, see Vijay-Shanker et al. (1987); see Kallmeyer (2010) for an overview of mildly context-sensitive grammar formalisms. The reader unfamiliar with this literature is encouraged to interpret the rules of the grammar as logical implications, with S and R representing predicates. There is an implicit conjunction between the two occurrences of R in the right-hand side of each of the rules (2) — (5). The symbols x, y, p, q are string-valued variables, with implicit universal quantification that has scope over both left-hand side and right-hand side of a rule. The rules (6) — (10) act as axioms. The symbols a, a, b, b are terminals, and ε denotes the empty string. We can derive S(x) for certain strings x, and R(x, y) for certain strings x and y. Figure 1 presents an example of a derivation. The language generated by G is the set L of strings x such that S(x) can be derived. By induction on the depth of derivations, one can show that if R(x, y), for strings x and y, then xy ∈O2. Thereby, if S(x) then x ∈O2, which means L ⊆O2. The task ahead is to prove that if xy ∈O2, for some x and y, then R(x, y). From this, L = O2 then follows. Let |x| denote the length of string x. For an inductive proof that xy ∈O2 implies R(x, y), the base cases are as follows. If xy ∈O2 and |x| ≤1 and |y| ≤1, then trivially R(x, y) by rules (6) — (10). Furthermore, if we can prove that xy ∈O2, x ̸= ε and y ̸= ε together imply R(x, y), for |xy| = m, for some m, then we may also prove that x′y′ ∈O2 on its own implies R(x′, y′) for |x′y′| = m. To see this, consider m > 0 and z ∈O2 with |z| = m, and write it as z = xy for some x ̸= ε and y ̸= ε. If by assumption R(x, y), then together with R(ε, ε) and rule (4) or (5) we may derive R(xy, ε) or R(ε, xy), respectively. In the light of this, the inductive step merely needs to show that if for some x and y: • xy ∈O2, |x| ≥1, |y| ≥1 and |xy| > 2, and • pq ∈O2 and |pq| < |xy| imply R(p, q), for all p and q, then also R(x, y). One easy case is if x ∈O2 (and thereby y ∈O2) because then we can write x = x1x2 for some x1 ̸= ε and x2 ̸= ε. The inductive hypothesis states that R(x1, x2) and R(ε, y), which imply R(x, y) using rule (4). A second easy case is if x or y has a proper prefix or proper suffix that is in O2. For example, assume there are z1 ̸= ε and z2 ̸= ε such that x = z1z2 and z1 ∈O2. Then we can use the inductive hypothesis on R(z1, ε) and R(z2, y), together with rule (2). At this time, the reader may wish to read Figure 1 from the root downward. First, abababba is divided into a pair of strings, namely ababab and ba. 
At each branching node in the derivation, a pair of strings is divided into four strings, which are grouped into two pairs of strings, using rules (2) — (5), read from left to right. Rules (2) and (3) divide each left-hand side argument into two parts. Rule (4) divides the first left-hand side argument into three parts, and rule (5) divides the second left-hand side argument into three parts. What remains to show is that if z1z2 ∈O2, z1 /∈ O2 and |z1z2| > 2, and no proper prefix or proper suffix of z1 or of z2 is in O2, then there is at least 1118 one rule that allows us to divide z1 and z2 into four strings altogether, say x, y, p, q, of which at least three are non-empty, such that xy ∈O2. This will then permit use of the inductive hypothesis on R(x, y) and on R(p, q). We can in fact restrict our attention to z′ 1z′ 2 ∈ O2, |z′ 1z′ 2| > 2, and no non-empty substring of z′ 1 or of z′ 2 is in O2, which can be justified as follows. Suppose we have z1 and z2 as in the previous paragraph, and suppose z′ 1 and z′ 2 result from z1 and z2 by exhaustively removing all nonempty substrings that are in O2; note that still |z′ 1z′ 2| > 2. If we can use a rule to divide z′ 1 and z′ 2 into x′, y′, p′, q′, of which at least three are nonempty, such that x′y′ ∈O2, then the same rule can be used to divide z1 and z2 into x, y, p, q with the required properties, which can be found from x′, y′, p′, q′ simply by reintroducing the removed substrings at corresponding positions. 3 Geometrical view We may interpret a string x geometrically in two dimensions, as a path consisting of a series of line segments of length 1, starting in some point (i, j). Every symbol in x, from beginning to end, represents the next line segment in that path; an occurrence of a represents a line segment from the previous point (i, j) to the next point (i + 1, j), a represents a line segment from (i, j) to (i −1, j), b represents a line segment from (i, j) to (i, j +1), and b represents a line segment from (i, j) to (i, j −1). If x ∈O2, then the path is closed, that is, the starting point and the ending point are the same. If we have two strings x and y such that xy ∈O2 and x /∈O2, then this translates to two paths, connecting two distinct points, which together form a closed path. This is illustrated in Figure 2. In the following, we assume a fixed choice of some x and y such that xy ∈O2, |xy| > 2, and no non-empty substring of x or of y is in O2. If we follow the path of x starting in P[0] = (0, 0), then the path ends in some point P[1] = (i, j) such that i is the number of occurrences of a minus the number of occurrences of a and j is the number of occurrences of b minus the number of occurrences of b. This path from P[0] to P[1] will be called A[0]. The path of y from P[1] back to P[0] will be called B[1]. We generalize this by defining for any integer k: P[k] is the point (k · i, k · j), A[k] is the path of x from P[k] to P[k + 1] and B[k] a b a b a b b a −2 −1 0 1 2 1 0 −1 −2 −3 Figure 2: Two strings x = ababab and y = ba together represent a closed path, consisting of a path from (0, 0) to (−1, −1) and a path from (−1, −1) to (0, 0). is the path of y from P[k] to P[k −1]. Where the starting points are irrelevant and only the shapes matter, we talk about paths A and B. Let C be a path, which can be either A[k] or B[k] for some k. We write Q ∈C to denote that Q is a point on C. Let Q = (i, j) ∈C, not necessarily with i and j being integers. 
We define the path-distance dC(Q) of Q on C to be the length of the path along line segments of C to get from P[k] to Q. In Figure 2, (0, −1) has pathdistance 3 on A[0], as the path on A[0] to reach (0, −1) from P[0] = (0, 0) consists of the line segments represented by the prefix aba of x. Similarly, dA[0]((0.5, −1)) = 2.5. Let C be a path as above and let points Q1, Q2 ∈C be such that dC(Q1) ≤dC(Q2). We define the subpath D = subC(Q1, Q2) to be such that Q ∈D if and only if Q ∈C and dC(Q1) ≤ Q ≤dC(Q2), and dD(Q) = dC(Q) −dC(Q1) for every Q ∈D. For two points Q1 and Q2, the line segment between Q1 and Q2 is denoted by seg(Q1, Q2). The task formulated at the end of Section 2 is accomplished if we can show that at least one of the following must hold: • the angle in P[0] between the beginning of A[0] and that of B[0] is 180◦(Figure 3); • there is a point Q /∈{P[0], P[1]} such that Q ∈A[0] and Q ∈B[1] (Figure 4); • there is a point Q ̸= P[1] such that Q ∈A[0], Q ∈A[1] and dA[0](Q) > dA[1](Q) (Figure 5); or • there is a point Q ̸= P[0] such that Q ∈B[0], Q ∈B[1] and dB[1](Q) > dB[0](Q) (analogous to Figure 5). 1119 a b a b b a a b a a b P[0] P[1] P[−1] 180◦ Figure 3: With x = a b a b b and y = a a b, the beginning of path A[0] and the beginning of (dotted) path B[0] have an 180◦angle in P[0], which implies x and y start with complementing symbols (here a and a; the other possibility is b and b). By applying rule (2), two smaller closed paths result, one of which consists of these two complementing symbols. b a a a b a a b b a P[0] P[1] Figure 4: The paths A[0] and B[1] of x = baaab and y = a ab ba have point (1, 1) in common. Two smaller closed paths result by applying rule (3). a a a b b a a b a a b a P[1] P[0] P[2] Q Figure 5: With x = b b a a b a a b a and y = a a a, the path A[0] and the (dotted) path A[1] have point Q in common, with dA[0](Q) = 6 > dA[1](Q) = 1. By applying rule (4), two smaller closed paths result, one of which is formed by prefix b of length 1 and suffix ab a of length |x| −6 = 3 of x. We will do this through a contradiction that results if we assume: (i) the angle in P[0] between the beginning of A[0] and that of B[0] is not 180◦; (ii) A[0] ∩B[1] = {P[0], P[1]}; (iii) there is no Q ∈(A[0] ∩A[1]) \ {P[1]} such that dA[0](Q) > dA[1](Q); and (iv) there is no Q ∈(B[0] ∩B[1]) \ {P[0]} such that dB[1](Q) > dB[0](Q). In the below, we will refer to these assumptions as the four constraints. 4 Continuous view Whereas paths A and B were initially formed out of line segments of length 1 between points (i, j) with integers i and j, the proof becomes considerably easier if we allow i and j to be real numbers. The benefit lies in being able to make changes to the paths that preserve the four constraints, to obtain a convenient normal form for A and B. If we can prove a contradiction on the normal form, we will have shown that no A and B can exist that satisfy the four constraints. We define, for each integer k, the line ℓ[k], which is perpendicular to the line through P[k] and P[k + 1], and lies exactly half-way between P[k] and P[k + 1]. Much as before, we write Q ∈ℓ[k] to denote that Q is a point on line ℓ[k]. We will consistently draw points ..., P[−1], P[0], P[1], ...in a straight line from left to right. Let C be a path, which can be either A[k′] or B[k′], for some k′, and let Q ∈C. 
We write from rightC(Q, ℓ[k]) to mean that path C is strictly to the right of ℓ[k] just before reaching Q, or formally, there is some δ > 0 such that each Q′ ∈C with dC(Q) −δ < dC(Q′) < dC(Q) lies strictly to the right of ℓ[k]. The predicates from left, to right, to left are similarly defined. Let Q1, Q2 ∈C ∩ℓ[k], for some k, such that dC(Q1) ≤dC(Q2). We say that C has an excursion from the right between Q1 and Q2 at ℓ[k] if from rightC(Q1, ℓ[k]) and to rightC(Q2, ℓ[k]). This is illustrated in Figure 6: the path is strictly to the right of ℓ[k] just before reaching Q1. From there on it may (but need not) cross over to the left of ℓ[k]. Just after it reaches Q2, it must again be strictly to the right of ℓ[k]. The definition of excursion from the left is symmetric. Note that excursions may be nested; in Figure 6, subC(Q1, Q2) has an excursion at ℓ[k] from the left below Q2. In Figure 6, the pair of points Q1 and R1 will be called a crossing of ℓ[k] from right to left, characterized by Q1, R1 ∈ℓ[k], from rightC(Q1, ℓ[k]), to leftC(R1, ℓ[k]) and subC(Q1, R1) being a line segment. The pair of points R2 and Q2 is a crossing of ℓ[k] from left to right, where the length of seg(R2, Q2) happens to be 0. In much of the following we will simplify the discussion by assuming crossings consist of single points, as in the case of R2 = Q2. However, existence of crossings con1120 ℓ[k −1] ℓ[k] ℓ[k + 1] Q1 R1 R2=Q2 P[k] P[k + 1] P[k + 2] Figure 6: Excursion from the right at ℓ[k]. ℓ[k −1] ℓ[k] ℓ[k + 1] m Q′ 1 Q′ 2 P[k] P[k + 1] P[k + 2] Figure 7: The excursion from Figure 6 truncated in Q′ 1 and Q′ 2 on line m. sisting of line segments of non-zero length, as in the case of Q1 and R1, would not invalidate any of the arguments of the proof. Excursions are the core obstacle that needs to be overcome for our proof. We can truncate an excursion at ℓ[k] by finding a suitable line m that is parallel to ℓ[k], some small distance away from it, between ℓ[k] and P[k + 1] for excursions from the right, and between ℓ[k] and P[k] for excursions from the left. We further need to find points Q′ 1, Q′ 2 ∈C ∩m, where dC(Q′ 1) < dC(Q1) and dC(Q2) < dC(Q′ 2). Because our coordinates no longer need to consist of integers, it is clear that m, Q′ 1 and Q′ 2 satisfying these requirements must exist. The truncation consists in changing subC(Q′ 1, Q′ 2) to become seg(Q′ 1, Q′ 2), as illustrated by Figure 7. Note that if C is say A[k′], for some k′, then changing the shape of C means changing the shape of A[k′′] for any other k′′ as well; the difference between A[k′] and A[k′′] is only in the starting point P[k′] versus P[k′′]. At this time, we must allow for the possibility that for some excursions, no m, Q′ 1 and Q′ 2 can be found with which we can implement a truncation, if we also need to preserve the four constraints and preserve absence of self-intersections. There is a small number of possible causes. First, ℓ[k] Q1 Q2 R1 R2 (a) ℓ[k] Q1 Q2 Q P[k′] (b) ℓ[k] Q1 Q2 Q Q′ (c) Figure 8: (a) Regions (shaded) of an excursion at ℓ[k]; due to additional crossings in R1 and R2, three more excursions exist, each with a smaller area. (b) & (c) If truncation would introduce selfintersection, then either the excursion is filled, with some point P[k′] as in (b), or there is an excursion with smaller area, illustrated by shading in (c). suppose that C = A[k′] and B[k′ + 1] intersects with seg(Q1, Q2). 
Then B[k′ + 1] may intersect with seg(Q′ 1, Q′ 2) for any choice of m, Q′ 1 and Q′ 2, and thereby no truncation is possible without violating constraint (ii). Similarly, a truncation may be blocked if C = B[k′ + 1] and A[k′] intersects with seg(Q1, Q2). Next, it could be that C = A[k′], while dA[k′](Q1) > dA[k′+1](Q) holds for some Q ∈seg(Q1, Q2) ∩A[k′ + 1], or dA[k′−1](Q) > dA[k′](Q2) holds for some Q ∈ seg(Q1, Q2)∩A[k′−1], either of which potentially blocks a truncation if constraint (iii) is to be preserved. Constraint (iv) has similar consequences. Furthermore, if we need to preserve absence of self-intersections, a truncation may be blocked if dC(Q) < dC(Q1) or dC(Q2) < dC(Q) for some Q ∈seg(Q1, Q2) ∩C. 5 Normal form The regions of an excursion of C between Q1 and Q2 at ℓ[k] are those that are enclosed by (subpaths of) subC(Q1, Q2) and (subsegments of) seg(Q1, Q2), as illustrated by Figure 8(a). The area of the excursion is the surface area of all regions together. We say an excursion is filled if any of its regions contains at least one point P[k′], for some integer k′, otherwise it is said to be unfilled. We say A and B are in normal form if no excursion can be truncated without violating the four constraints or introducing a self-intersection. Sup1121 pose A and B are in normal form, while one or more excursions remain. Let us first consider the unfilled excursions. Among them choose one that has the smallest area. By assumption, one of the four constraints must be violated or a new selfintersection must be introduced, if we were to truncate that excursion. We will consider all relevant cases. Each case will assume an unfilled excursion from the right (excursions from the left are symmetric) of a path C between Q1 and Q2 at ℓ[k]. We may assume that subC(Q1, Q2) ∩ℓ[k] = {Q1, Q2}, as additional crossings of ℓ[k] would mean that excursions exist with smaller areas (cf. Figure 8(a)), contrary to the assumptions. Now assume truncation is blocked due to Q ∈ seg(Q1, Q2) ∩C such that dC(Q) < dC(Q1) (the case dC(Q2) < dC(Q) is symmetric), as we need to preserve absence of self-intersection. Suppose Q is the only such point, so that C crosses seg(Q1, Q2) from left to right once without ever crossing it from right to left, until Q1 is reached. Then C starts in the area of the excursion, or in other words, the excursion is filled, contrary to the assumptions (cf. Figure 8(b)). Now suppose there are points Q′ and Q where C crosses seg(Q1, Q2) from right to left and from left to right, respectively and dC(Q′) < dC(Q) < dC(Q1). If there are several choices, choose Q′ and Q such that subC(Q′, Q) ∩ℓ[k] = {Q′, Q}. This means the excursion between Q′ and Q has an area smaller than the one between Q1 and Q2, contrary to the assumptions (cf. Figure 8(c)). Note that excursions with zero area, that is, those that intersect with ℓ[k] without crossing over to the other side, can always be truncated. We can therefore further ignore non-crossing intersections. Now suppose a truncation would violate constraint (ii), where C = B[k′ + 1] and D = A[k′] crosses seg(Q1, Q2). Then much as above, we may distinguish two cases. In the first, D has only one crossing of seg(Q1, Q2) in some point Q, which means the excursion is filled with the starting or ending point of D, as in Figure 9(a). 
In the second, D has at least two consecutive crossings, say in Q and Q′, from right to left and from left to right, respectively, which means the excursion between Q and Q′ has smaller area than the one between Q1 and Q2, illustrated by shading in Figure 9(b). Both cases contradict the assumptions. ℓ[k] Q1 C Q2 D Q P[k′] (a) ℓ[k] Q1 C Q2 D Q Q′ (b) Figure 9: Truncating the excursion would introduce a violation of constraint (ii). The assumptions are contradicted in one of two ways. For C = A[k′] and D = B[k′ + 1], the reasoning is symmetric. Next, suppose a truncation would violate constraint (iii), where C = A[k′] and A[k′ −1] crosses seg(Q1, Q2) in Q, while dA[k′−1](Q) > dA[k′](Q2). If the crossing in Q is from right to left, and there is an immediately next crossing in Q′ from left to right, then we have the same situation as in Figure 9(b), involving an excursion with smaller area, contradicting the assumptions. If the crossing in Q is the only one, and it is from right to left, then we can use the fact that subA[k′](Q1, Q2) ∩subA[k′−1](Q, P[k′]) = ∅, as we assume the four constraints as yet hold. This means P[k′] must be contained in the area of the excursion, as illustrated in Figure 10(a), contradicting the assumption that the excursion is unfilled. If the crossing in Q is the only one, and it is from left to right, then we can use the fact that subA[k′](Q1, Q2) ∩subA[k′−1](Q′ 2, Q) = ∅, for the unique Q′ 2 ∈A[k′ −1] ∩ℓ[k −1] such that dA[k′−1](Q′ 2) = dA[k′](Q2). This means the excursion contains Q′ 2, which implies there is another unfilled excursion between points R1, R2 ∈ A[k′] ∩ℓ[k −1] with smaller area, as shaded in Figure 10(b), contrary to the assumptions. Suppose a truncation would violate constraint (iii), where C = A[k′] and A[k′ + 1] crosses seg(Q1, Q2) in Q, while dA[k′](Q1) > dA[k′+1](Q). The reasoning is now largely symmetric to the above, with the direction of the crossing reversed, except that the case analogous to Figure 10(b) is immediately excluded, as Q′ 2 cannot be both to the left and to the right of ℓ[k]. Constraint (iv) is symmetric to constraint (iii). All pos1122 ℓ[k] Q1 A[k′] Q2 A[k′ −1] Q P[k′] (a) ℓ[k] ℓ[k−1] Q1 A[k′] Q2 A[k′ −1] Q Q′ 2 R1 R2 (b) Figure 10: Truncating the excursion would introduce a violation of constraint (iii), where dA[k′−1](Q) > dA[k′](Q2). The assumptions are contradicted in one of three ways, the first as in Figure 9(b), and the second and third as in (a) and (b) above. sible cases have been shown to lead to contradictions, and therefore we conclude that there are no unfilled excursions if A and B are in normal form. We now show that there cannot be any filled excursions either. For this, assume that A[k′] has a filled excursion between Q1 and Q2 at ℓ[k] from the right. This means A[k′ −1] has an identically shaped, filled excursion at ℓ[k −1] from the right, between corresponding points Q′ 1 and Q′ 2. Let us consider how path A[k′] proceeds after reaching Q2. There are only three possibilities: • it ends in P[k + 1], with k′ = k, without further crossings of ℓ[k] or ℓ[k + 1]; • it next crosses ℓ[k] leftward; or • it next crosses ℓ[k + 1] in some point Q3. The first of these can be excluded, in the light of dA[k′−1](Q) ≥ dA[k′](Q2) for each Q ∈ subA[k′−1](Q′ 2, P[k′]). Due to constraint (iii) therefore, this subpath of A[k′ −1] cannot intersect with the excursion of A[k′] to reach P[k], and therefore A[k′] cannot reach P[k +1]. The second possibility is also excluded, as this would imply the existence of an unfilled excursion. 
For the remaining possibility, Q3 ∈A[k′] ∩ℓ[k + 1] may be lower down than Q2 (in the now familiar view of the points P[0], P[1], . . . being drawn from left to right along a horizontal line), or it may be higher up than Q1. These two cases are drawn in Figures 11 and 12. The choice of Q3 also determines a corresponding Q′ 3 ∈A[k′ −1] ∩ℓ[k]. ℓ[k−1] Q′ 1 Q′ 2 P[k−1] Q′ 3 Q′ 4 ℓ[k] Q1 Q2 P[k] Q3 Q4 ℓ[k+1] P[k+1] ℓ[k+2] P[k+2] Figure 11: Continuing the (solid) path A[k′] after a filled excursion, restricted by the (dashed) path A[k′ −1], in the light of constraint (iii). ℓ[k−1] Q′ 1 Q′ 2 P[k−1] Q′ 3 Q′ 4 ℓ[k] Q1 Q2 P[k] Q3 Q4 ℓ[k+1] P[k+1] ℓ[k+2] P[k+2] Figure 12: As in Figure 11 but Q3 is chosen to be higher up than Q1. We now consider how A[k′] continues after Q3 in the case of Figure 11. If it next crosses ℓ[k + 1] leftward, this would imply the existence of an unfilled excursion. Further, dA[k′−1](Q) ≥ dA[k′](Q3) for each Q ∈subA[k′−1](Q′ 3, P[k′]). Due to constraint (iii) therefore, this subpath of A[k′ −1] cannot intersect with subA[k′](Q2, Q3), above which lies P[k + 1]. Therefore, A[k′] must cross ℓ[k + 2] in some Q4, which is lower down than Q3. This continues ad infinitum, and A[k′] will never reach its supposed end point P[k′ + 1]. The reasoning for Figure 12 is similar. Filled excursions from the left are symmetric, but instead of investigating the path after Q2, we must investigate the path before Q1. The case of B is symmetric to that of A. We may now conclude no filled excursions exist. 6 The final contradiction We have established that after A and B have been brought into normal form, there can be no remaining excursions. This means that A[0] crosses ℓ[0] exactly once, in some point RA, and B[0] crosses ℓ[−1] exactly once, in some point LB. Further, let LA be the unique point where A[−1] crosses ℓ[−1] and RB the unique point where B[1] crosses ℓ[0]. The region of the plane between ℓ[−1] and ℓ[0] 1123 ℓ[−1] ℓ[0] P[−1] P[0] P[1] A[0] B[0] B[1] A[−1] LB LA RA RB Figure 13: The region between ℓ[−1] and ℓ[0] is divided by A[0] and B[0] into a ‘top’ region (lightly shaded), a ‘bottom’ region (white), and areas enclosed by intersections of A[0] and B[0] (darkly shaded). Here A[−1] and B[1] are both in the ‘bottom’ region. can now be partitioned into a ‘top’ region, a ‘bottom’ region, and zero or more enclosed regions. The ‘top’ region consists of those points that are reachable from any point between ℓ[−1] and ℓ[0] arbitrarily far above any point of A[0] and B[0], without intersecting with A[0], B[0], ℓ[−1] or ℓ[0]. This is the lightly shaded region in Figure 13. The ‘bottom’ region is similarly defined, in terms of reachability from any point between ℓ[−1] and ℓ[0] arbitrarily far below any point of A[0] and B[0]. The zero or more enclosed regions stem from possible intersections of A[0] and B[0]; the two such enclosed regions in Figure 13 are darkly shaded. Note that the four constraints do not preclude intersections of A[0] and B[0]. However, constraint (ii) implies that, between ℓ[−1] and ℓ[0], A[0] and B[1] do not intersect other than in P[0], and similarly, A[−1] and B[0] do not intersect other than in P[0]. Moreover, for any Q ∈subA[−1](LA, P[0]) and any Q′ ∈subA[0](P[0], RA) we have dA[−1](Q) ≥ dA[0](Q′). By constraint (iii) this means A[−1] and A[0] do not intersect other than in P[0]. Similarly, B[1] and B[0] do not intersect other than in P[0]. The angles in P[0] between A[0], B[0], A[−1] and B[1] are multiples of 90◦. 
Because of constraint (i), which excludes a 180◦angle between A[0] and B[0], it follows that either subA[−1](LA, P[0]) and subB[1](RB, P[0]) both lie entirely in the ‘top’ region, or both lie entirely in the ‘bottom’ region. The latter case is illustrated in Figure 13. In the former case, LA and RB are above LB and RA, respectively, and in the latter case LA and RB are below LB and RA. This is impossible, as LA and RA should be at the same height, these being corresponding points of A[−1] and A[0], which have the same shape, and similarly LB and RB should be at the same height. This contradiction now leads back to the very beginning of our proof, and implies that the four constraints cannot all be true, and therefore that at least one rule is always applicable to allow use of the inductive hypothesis, and therefore that G generates O2. 7 Conclusions and outlook We have presented a new proof that O2 is generated by a MCFG. It has at least superficial elements in common with the proof by Salvati (2015). Both proofs use essentially the same MCFG, both are geometric in nature, and both involve a continuous view of paths next to a discrete view. The major difference lies in the approach to tackling the myriad ways in which the paths can wind around each other and themselves. In the case of Salvati (2015), the key concept is that of the complex exponential function, which seems to restrict the proof technique to two-dimensional geometry. In our case, the key concepts are excursions and truncation thereof, and the identification of top and bottom regions. At this time, no proof is within reach that generalizes the result to O3, i.e. the language of strings over an alphabet of six symbols, in which the number of a’s equals the number of a’s, the number of b’s equals the number of b’s, and the number of c’s equals the number of c’s; this language is rationally equivalent to MIX-4, which is defined analogously to MIX, but with four symbols. One may expect however that a proof would use threedimensional geometry and generalize some of the arguments from this paper. Our aim here is to make this plausible, while emphasizing that an actual proof will require a novel framework at least as involved as that presented in the previous sections. Omitting the start rule and the axioms, an obvious candidate MCFG to generate O3 would among others have the three rules: R(p1q1, p2q2, q3p3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1q1, q2p2, p3q3) ←R(p1, p2, p3) R(q1, q2, q3) R(q1p1, p2q2, p3q3) ←R(p1, p2, p3) R(q1, q2, q3) as well as the six rules: 1124 R(p1q1p2, p3q2, q3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1q1p2, q2, p3q3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1q1, p2q2p3, q3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1q1, q2, p2q3p3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1, q1p2q2, q3p3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1, q1p2, q2p3q3) ←R(p1, p2, p3) R(q1, q2, q3) Consider three strings x, y and z such that xyz ∈O3. If we can use any of the above rules to divide these into six strings out of which we can select three, which concatenated together are a non-empty string in O3 shorter than xyz, then we can use the inductive hypothesis, much as in Section 2. For a proof by contradiction, therefore assume that no pair of prefixes of x and y and a suffix of z together form a non-empty string in O3 shorter than xyz, etc., in the light of the first three rules above, and assume that no ’short enough’ prefix of x, a prefix of y and a ’short enough’ suffix of x together form a non-empty string in O3, etc., in the light of the next six rules above. 
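The membership conditions that these candidate rules are meant to capture are themselves easy to state computationally. The following sketch (plain Python) checks membership in O2 and O3 by counting matched symbols; the encoding of the barred symbols as uppercase letters (a/A, b/B, c/C) is an assumption of the sketch only, not the paper's notation.

from collections import Counter

def in_O(word: str, letters: str = "ab") -> bool:
    # True iff the word uses only the given letters and their barred (uppercase)
    # counterparts, and each letter occurs exactly as often as its barred form.
    alphabet = set(letters) | {x.upper() for x in letters}
    if any(ch not in alphabet for ch in word):
        return False
    counts = Counter(word)
    return all(counts[x] == counts[x.upper()] for x in letters)

def in_O2(word: str) -> bool:
    return in_O(word, "ab")

def in_O3(word: str) -> bool:
    return in_O(word, "abc")

if __name__ == "__main__":
    print(in_O2("aAbB"))    # True: one a, one a-bar, one b, one b-bar
    print(in_O2("aaB"))     # False: counts do not match
    print(in_O3("abcCBA"))  # True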
For a geometric interpretation, consider the paths of x, y and z, leading from point P0 = (0, 0, 0) to points Px, Py and Pz, respectively. The concatenations of prefixes of x and y, and similarly those of x and z and those of y and z form three connecting surfaces, together forming one surface dividing the space around P0 into an ‘above’ and a ‘below’; cf. Figure 14. Our assumptions imply that the final parts of the paths of x, y and z from −Px, −Py and −Pz, respectively, to P0 should not intersect with this surface. In addition, no pair of strings from x, y and z should end on complementing symbols, i.e. a and a, b and b, or c and c. This means that the three paths leading towards P0 must all end in P0 strictly ‘above’ or all strictly ‘below’ the surface. This might lead to a contradiction, similar to that in Section 6, but only if one can ensure that none of the three paths to P0 ‘sneak around’ the surface. This is illustrated in Figure 15, where the path of z is ‘entangled’ with a copy of itself. It appears this can be achieved by adding three more rules, namely: R(p1q1p2q2, p3, q3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1, q1p2q2p3, q3) ←R(p1, p2, p3) R(q1, q2, q3) R(p1, q1, q2p2q3p3) ←R(p1, p2, p3) R(q1, q2, q3) The physical interpretation of, say, the last rule seems to be that the path of z from −Pz to P0 can be iteratively shifted such that points other than its ending point coincide with P0. At some stage P0 Px Py Pz −Pz −Py −Px Figure 14: By taking prefixes of two strings from {x, y, z} and concatenating them, we obtain a surface dividing the space around P0 into ‘above’ and ‘below’. Here the path of z from −Pz to P0 ends ‘above’, if our view is from above the surface. P0 Px Py Pz −Pz −Py −Px Figure 15: The path of z from −Pz to P0 is initially above the surface, but ‘sneaks around’ the path of z from Py to −Px, to end below. the shifted path must intersect with the path of z from Py to −Px, before the entanglement of the two paths is broken. The considerable challenges ahead involve finding a suitable definition of ‘excursions’ in three dimensions, and proving that these can be systematically truncated without violating appropriate constraints that preclude application of the above 12 rules. Acknowledgements This work came out of correspondence with Giorgio Satta. Gratefully acknowledged are also fruitful discussions with Sylvain Salvati, Vinodh Rajan, and Markus Pfeiffer. Much appreciated are anonymous referees and editors for their efforts and their courage to consider a theoretical paper for publication at this venue. 1125 References [Bach1981] E. Bach. 1981. Discontinuous constituents in generalized categorial grammars. In Proceedings of the Eleventh Annual Meeting of the North Eastern Linguistic Society, pages 1–12. [Boullier1999] P. Boullier. 1999. Chinese numbers, MIX, scrambling, and Range Concatenation Grammars. In Ninth Conference of the European Chapter of the Association for Computational Linguistics, pages 53–60, Bergen, Norway, June. [Capelletti and Tamburini2009] M. Capelletti and F. Tamburini. 2009. Parsing with polymorphic categorial grammars. Research in Computing Science, 41(2009):87–98. [Gilman2005] R.H. Gilman. 2005. Formal languages and their application to combinatorial group theory. Contemporary Mathematics, 378:1–36. [Joshi et al.1991] A.K. Joshi, K. Vijay-Shanker, and D. Weir. 1991. The convergence of mildly contextsensitive grammar formalisms. In P. Sells, S.M. Shieber, and T. 
Wasow, editors, Foundational Issues in Natural Language Processing, chapter 2, pages 31–81. MIT Press. [Joshi1985] A.K. Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions? In D.R. Dowty, L. Karttunen, and A.M. Zwicky, editors, Natural language parsing: Psychological, computational, and theoretical perspectives, pages 206–250. Cambridge University Press. [Kallmeyer2010] Laura Kallmeyer. 2010. Parsing Beyond Context-Free Grammars. Springer-Verlag. [Kanazawa and Salvati2012] M. Kanazawa and S. Salvati. 2012. MIX is not a tree-adjoining language. In 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 666–674, Jeju Island, Korea, July. [Pullum1983] G.K. Pullum. 1983. Context-freeness and the computer processing of human languages. In 21st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 1–6, Cambridge, Massachusetts, July. [Salvati2015] S. Salvati. 2015. MIX is a 2-MCFL and the word problem in Z2 is captured by the IO and the OI hierarchies. Journal of Computer and System Sciences, 81:1252–1277. [Seki et al.1991] H. Seki, T. Matsumura, M. Fujii, and T. Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88:191–229. [Sorokin2014] A. Sorokin. 2014. Pumping lemma and Ogden lemma for displacement context-free grammars. In Developments in Language Theory, 18th International Conference, volume 8633 of Lecture Notes in Computer Science, pages 154–165, Ekaterinburg, Russia. Springer-Verlag. [Vijay-Shanker et al.1987] K. Vijay-Shanker, D.J. Weir, and A.K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In 25th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 104–111, Stanford, California, USA, July. 1126
2016
106
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1127–1137, Berlin, Germany, August 7-12, 2016. © 2016 Association for Computational Linguistics

Context-aware Argumentative Relation Mining
Huy V. Nguyen, Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, [email protected]
Diane J. Litman, Computer Science Department and Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA 15260, [email protected]

Abstract

Context is crucial for identifying argumentative relations in text, but many argument mining methods make little use of contextual features. This paper presents context-aware argumentative relation mining that uses features extracted from writing topics as well as from windows of context sentences. Experiments on student essays demonstrate that the proposed features improve predictive performance in two argumentative relation classification tasks.

1 Introduction

By supporting tasks such as automatically identifying argument components (e.g., premises, claims) in text, and the argumentative relations (e.g., support, attack) between components, argument (argumentation) mining has been studied for applications in different research fields such as document summarization (Teufel and Moens, 2002), opinion mining (Boltužić and Šnajder, 2014), automated essay evaluation (Burstein et al., 2003), legal information systems (Palau and Moens, 2009), and policy modeling platforms (Florou et al., 2013). [Footnote 1: There is no consensus yet on an annotation scheme for argument components, or on the minimal textual units to be annotated. We follow Peldszus and Stede (2013) and consider “argument mining as the automatic discovery of an argumentative text portion, and the identification of the relevant components of the argument presented there.” We also borrow their term “argumentative discourse unit” to refer to the textual units (e.g., text segment, sentence, clause) which are considered as argument components.]

Given a pair of argument components with one component as the source and the other as the target, argumentative relation mining involves determining whether a relation holds from the source to the target, and classifying the argumentative function of the relation (e.g., support vs. attack). Argumentative relation mining, beyond argument component mining, is perceived as an essential step towards more fully identifying the argumentative structure of a text (Peldszus and Stede, 2013; Sergeant, 2013; Stab et al., 2014).

Figure 1: Excerpt from a student persuasive essay (Stab and Gurevych, 2014a). Sentences are numbered and argument components are tagged.
Essay 73. Topic: Is image more powerful than the written word?
...(1)Hence, I agree only to certain degree that in today’s world, image serves as a more effective means of communication[MajorClaim]. (2)Firstly, pictures can influence the way people think[Claim]. (3)For example, nowadays horrendous images are displayed on the cigarette boxes to illustrate the consequences of smoking[Premise]. (4)As a result, statistics show a slight reduction in the number of smokers, indicating that they realize the effects of the negative habit[Premise]...

Consider the second paragraph shown in Figure 1. Only detecting the argument components (a claim in sentence 2 and two premises in sentences 3 and 4) does not give a complete picture of the argumentation. By looking for relations between these components, one can also see that the two premises together justify the claim.
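As a concrete picture of the structure just described, the following sketch holds the components of the excerpt and their directed relations in a small data structure. The class names and the specific Support labels are assumptions made for illustration; the annotation itself is shown in Figure 2 below.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    ident: int   # sentence number in the excerpt
    kind: str    # "MajorClaim", "Claim", or "Premise"
    text: str

@dataclass(frozen=True)
class Relation:
    source: int  # ident of the source component
    target: int  # ident of the target component
    label: str   # "Support" or "Attack"

components = [
    Component(1, "MajorClaim", "image serves as a more effective means of communication"),
    Component(2, "Claim", "pictures can influence the way people think"),
    Component(3, "Premise", "horrendous images are displayed on the cigarette boxes ..."),
    Component(4, "Premise", "statistics show a slight reduction in the number of smokers ..."),
]

# Premises 3 and 4 each relate to Claim 2, and the Claim in turn relates to the
# MajorClaim; the Support labels here are illustrative assumptions.
relations = [
    Relation(3, 2, "Support"),
    Relation(4, 2, "Support"),
    Relation(2, 1, "Support"),
]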
The argumentation structure of the text in Figure 1 is illustrated in Figure 2. Our current study proposes a novel approach for argumentative relation mining that makes use of contextual features extracted from surrounding sentences of source and target components as well as from topic information of the writings.

Figure 2: Structure of the argumentation in the excerpt (nodes MajorClaim(1), Claim(2), Premise(3), Premise(4); edges labeled Support or Attack). Relations are illustrated according to the annotation provided in the corpus. Premises 3 and 4 were annotated for separate relations to Claim 2. Our visualization should not be taken to imply that the two premises are linked or convergent.

Prior argumentative relation mining studies have often used features extracted from argument components to model different aspects of the relations between the components, e.g., relative distance, word pairs, semantic similarity, textual entailment (Cabrio and Villata, 2012; Stab and Gurevych, 2014b; Boltužić and Šnajder, 2014; Peldszus and Stede, 2015b). Features extracted from the text surrounding the components have been less explored, e.g., using words and their part-of-speech from adjacent sentences (Peldszus, 2014). The first hypothesis investigated in this paper is that the discourse relations of argument components with adjacent sentences (called context windows in this study; a formal definition is given in §5.3) can help characterize the argumentative relations that connect pairs of argument components. Reconsidering the example in Figure 1, without knowing the content “horrendous images are displayed on the cigarette boxes” in sentence 3, one cannot easily tell that “reduction in the number of smokers” in sentence 4 supports the “pictures can influence” claim in sentence 2. We expect that such content relatedness can be revealed by a discourse analysis, e.g., the appearance of the discourse connective “As a result”.

While topic information in many writing genres (e.g., scientific publications, Wikipedia articles, student essays) has been used to create features for argument component mining (Teufel and Moens, 2002; Levy et al., 2014; Nguyen and Litman, 2015), topic-based features have been less explored for argumentative relation mining. The second hypothesis investigated in this paper is that features based on topic context also provide useful information for improving argumentative relation mining. In the excerpt below, knowing that ‘online game’ and ‘computer’ are topically related might help a model decide that the claim in sentence 1 supports the claim in sentence 2:

(1)People who are addicted to games, especially online games, can eventually bear dangerous consequences[Claim]. (2)Although it is undeniable that computer is a crucial part of human life[Premise], it still has its bad side[MajorClaim]. [Footnote 2: In this excerpt, the Premise was annotated as an attack to the MajorClaim in sentence 2.]

Motivated by the discussion above, we propose context-aware argumentative relation mining, a novel approach that makes use of contextual features that are extracted by exploiting context sentence windows and writing topic to improve relation prediction. In particular, we derive features using discourse relations between argument components and windows of their surrounding sentences. We also derive features using an argument and domain word lexicon automatically created by post-processing an essay’s topic model.
Experimental results show that our proposed contextual features help significantly improve performance in two argumentative relation classification tasks.

2 Related Work

Unlike argument component identification, where textual inputs are typically sentences or clauses (Moens et al., 2007; Stab and Gurevych, 2014b; Levy et al., 2014; Lippi and Torroni, 2015), the textual inputs of argumentative relation mining vary from clauses (Stab and Gurevych, 2014b; Peldszus, 2014) to multiple sentences (Biran and Rambow, 2011; Cabrio and Villata, 2012; Boltužić and Šnajder, 2014). Studying claim justification between user comments, Biran and Rambow (2011) proposed that the argumentation in the justification of a claim can be characterized with the discourse structure in the justification. They, however, only considered discourse markers, not discourse relations. Cabrio et al. (2013) conducted a corpus analysis and found certain similarities between Penn Discourse TreeBank relations (Prasad et al., 2008) and argumentation schemes (Walton et al., 2008). However, they did not discuss how such similarity could be applied to argument mining. Motivated by these findings, we propose to use features extracted from discourse relations between sentences for argumentative relation mining. Moreover, to enable discourse relation features when the textual inputs are only sentences/clauses, we group the inputs with their context sentences. Qazvinian and Radev (2010) used the term “context sentence” to refer to sentences surrounding a citation that contained information about the cited source but did not explicitly cite it. In our study, we only require that the context sentences of an argument component be in the same paragraph and adjacent to the component.

Prior work in argumentative relation mining has used argument component labels to provide constraints during relation identification. For example, when an annotation scheme (e.g., Peldszus and Stede (2013); Stab and Gurevych (2014a)) does not allow relations from claim to premise, no relations are inferred during relation mining for any argument component pair where the source is a claim and the target is a premise. In our work, we follow Stab and Gurevych (2014b) and use the predicted labels of argument components as features during argumentative relation mining. We, however, take advantage of an enhanced argument component model (Nguyen and Litman, 2016) to obtain more reliable argument component labels than in (Stab and Gurevych, 2014b).

Argument mining research has studied different data-driven approaches for separating organizational content (shell) from topical content to improve argument component identification, e.g., supervised sequence models (Madnani et al., 2012) and unsupervised probabilistic topic models (Séaghdha and Teufel, 2014; Du et al., 2014). Nguyen and Litman (2015) post-processed LDA (Blei et al., 2003) output to extract a lexicon of argument and domain words from development data. Their semi-supervised approach exploits the topic context through essay titles to guide the extraction. Finally, prior research has explored predicting different argumentative relationship labels between pairs of argument components, e.g., attachment (Peldszus and Stede, 2015a), support vs. non-support (Biran and Rambow, 2011; Cabrio and Villata, 2012; Stab and Gurevych, 2014b), {implicit, explicit} × {support, attack} (Boltužić and Šnajder, 2014), and verifiability of support (Park and Cardie, 2014).
Our experiments use two such argumentative relation classification tasks (Support vs. Non-support, Support vs. Attack) to evaluate the effectiveness of our proposed features.

3 Persuasive Essay Corpus

Stab and Gurevych (2014a) compiled the Persuasive Essay Corpus consisting of 90 student argumentative essays and made it publicly available (www.ukp.tu-darmstadt.de/data/argumentation-mining). Because the corpus has been utilized for different argument mining tasks (Stab and Gurevych, 2014b; Nguyen and Litman, 2015; Nguyen and Litman, 2016), we use this corpus to demonstrate our context-aware argumentative relation mining approach, and adapt the model developed by Stab and Gurevych (2014b) to serve as the baseline for evaluating our proposed approach.

Three experts identified possible argument components of three types within each sentence in the corpus (MajorClaim: the writer’s stance toward the writing topic; Claim: controversial statements that support or attack the MajorClaim; Premise: evidence used to underpin the validity of a Claim), and also connected the argument components using two argumentative relations (Support and Attack). According to the annotation manual, each essay has exactly one MajorClaim. A sentence can have one or more argument components (argumentative sentences); sentences that do not contain any argument component are labeled non-argumentative. Figure 1 shows an example essay with components annotated, and Figure 2 illustrates relations between those components. Argumentative relations are directed and can hold between a Premise and another Premise, a Premise and a (Major-)Claim, or a Claim and a MajorClaim. Except for the relation from Claim to MajorClaim, an argumentative relation does not cross paragraph boundaries. The three experts achieved inter-rater accuracy 0.88 for component labels and Krippendorff’s αU 0.72 for component boundaries. Given the annotated argument components, the three experts obtained Krippendorff’s α 0.81 for relation labels. The number of relations is shown in Table 1.

Table 1: Data statistics of the corpus.
Label      #instances
Within-paragraph constraint
  Support    989
  Attack     103
No paragraph constraint
  Support    1312
  Attack     161

4 Argumentative Relation Tasks

4.1 Task 1: Support vs. Non-support

Our first task follows Stab and Gurevych (2014b): given a pair of source and target argument components, identify whether the source argumentatively supports the target or not. Note that when a support relation does not hold, the source may attack or have no relation with the target component. For every two argument components in the same paragraph, we form two pairs (i.e., reversing source and target); allowing cross-paragraph relations would exponentially increase the number of no-relation pairs, which would make the prediction data extremely skewed (Stab and Gurevych, 2014b). In total we obtain 6330 pairs, of which 989 (15.6%) have a Support relation. Among the 5341 Non-support pairs, 103 have an Attack relation and 5238 are no-relation pairs. Stab and Gurevych (2014b) split the corpus into an 80% training set and a 20% test set which have similar label distributions. We use this split to train and test our proposed models, and directly compare our models’ performance to the reported performance in (Stab and Gurevych, 2014b).

4.2 Task 2: Support vs. Attack

To further evaluate the effectiveness of our approach, we conduct an additional task that classifies an argumentative relation as Support or Attack. For this task, we assume that the relation (i.e., attachment (Peldszus, 2014)) between two components is given, and aim at identifying the argumentative function of the relation.
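The within-paragraph pair construction used for Task 1 can be sketched in a few lines of Python; the data layout (a dict from paragraph id to the list of its components) and the names below are assumptions of the sketch.

from itertools import permutations

def candidate_pairs(components_by_paragraph):
    # Yield every ordered (source, target) pair of components drawn from the
    # same paragraph; permutations(..., 2) gives each unordered pair twice,
    # once per direction, matching the "reversing source and target" setup.
    for components in components_by_paragraph.values():
        yield from permutations(components, 2)

if __name__ == "__main__":
    paragraphs = {"p1": ["Claim_2", "Premise_3", "Premise_4"], "p2": ["Claim_5"]}
    print(list(candidate_pairs(paragraphs)))
    # 6 ordered pairs from p1, none from p2 (a lone component forms no pair)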
Because we remove the paragraph constraint in this task, we obtain more Support relations than in Task 1. As shown in Table 1, of the total 1473 relations, we have 1312 (89%) Support and 161 (11%) Attack relations. Because this task was not studied in (Stab and Gurevych, 2014b), we adapt Stab and Gurevych’s model to use as the baseline. 5 Argumentative Relation Models 5.1 Baseline We adapt (Stab and Gurevych, 2014b) to use as a baseline for evaluating our approach. Given a pair of argument components, we follow (Stab and Gurevych, 2014b) by first extracting 3 feature sets: structural (e.g., word counts, sentence position), lexical (e.g., word pairs, first words), and grammatical production rules (e.g., S→NP,VP). 4Allowing cross-paragraph relations exponentially increases the number of no-relation pairs, which makes the prediction data extremely skewed (Stab and Gurevych, 2014b). Because a sentence may have more than one argument component, the relative component positions might provide useful information (Peldszus, 2014). Thus, we also include 8 new component position features: whether the source and target components are the whole sentences or the beginning/end components of the sentences; if the source is before or after the target component; and the absolute difference of their positions. Stab and Gurevych (2014b) used a 55-discourse marker set to extract indicator features. We expand their discourse maker set by combining them with a 298-discourse marker set developed in (Biran and Rambow, 2011). We expect the expanded set of discourse markers will represent better possible discourse relations in the texts. Stab and Gurevych (2014b) used predicted label of argument components as features for both training and testing their argumentation structure identification model.5 As their predicted labels are not available to us, we adapt this feature set by using the argument component model in (Nguyen and Litman, 2016) which was shown to outperform the corresponding model of Stab and Gurevych. For later presentation purposes, we name the set of all features from this section except word pairs and production rules as the common features. While word pairs and grammatical production rules were the most predictive features in (Stab and Gurevych, 2014b), we hypothesize that this large and sparse feature space may have negative impact on model robustness (Nguyen and Litman, 2015). Most of our proposed models replace word pairs and production rules with different combinations of new contextual features. 5.2 Topic-context Model Our first proposed model (TOPIC) makes use of Topic-context features derived from a lexicon of argument and domain words for persuasive essays (Nguyen and Litman, 2015). Argument words (e.g., ‘believe’, ‘opinion’) signal the argumentative content and are commonly used across different topics. In contrast, domain words are specific terminologies commonly used within the topic (e.g., ‘art’, ‘education’). The authors first use 5Stab and Gurevych (2014b) reported that including goldstandard labels of argument component in both training and testing phases yielded results close to human performance. Our preliminary experiment showed that including goldstandard argument component labels in training did not help when predicted labels were used in the test set. 1130 topic prompts in development data of unannotated persuasive essays to semi-automatically collect argument and domain seed words. 
In particular, they used 10 argument seed words: agree, disagree, reason, support, advantage, disadvantage, think, conclusion, result, opinion. Domain seed words are those in the topic prompts but not argument seed words or stop words. The seeds words are then used to supervise an automated extraction of argument and domain words from output of LDA topic model (Blei et al., 2003) on the development data. The extracted lexicon consists of 263 (stemmed) argument words and 1806 (stemmed) domain words mapped to 36 LDA topics.6 All argument words are from a single LDA topic while a domain word can map to multiple LDA topics (except the topic of argument words). Using the lexicon, we extract the following Topic-context features: Argument word: from all word pairs extracted from the source and target components, we remove those that have at least one word not in the argument word list. Each argument word pair defines a boolean feature indicating its presence in the argument component pair. We also include each argument word of the source and target components as a boolean feature which is true if the word is present in the corresponding component. We count number of common argument words, the absolute difference in number of argument words between source and target components. Domain word count: to measure the topic similarity between the source and target components, we calculate number of common domain words, number of pairs of two domain words that share an LDA topic, number of pairs that share no LDA topic, and the absolute difference in number of domain words between the two components. Non-domain MainVerb-Subject dependency: we extract MainVerb-Subject dependency triples, e.g., nsubj(belive, I), from the source and target components, and filter out triples that involve domain words. We model each extracted triple as a boolean feature which is true if the corresponding argument component has the triple. Finally, we include the common feature set. To illustrate the Topic-context features, consider the following source and target components. Argument words are in boldface, and domain 6An LDA topic is simply represented by a number, and should not be misunderstood with essay topics. words are in italic. Essay 54. Topic: museum and art gallery will disappear soon? Source: more and more people can watch exhibitions through television or internet at home due to modern technology[Premise] Target: some people think museums and art galleries will disappear soon[Claim] An argument word pair is people-think. There are 35 pairs of domain words. A pair of two domain words that share an LDA topic is exhibitionsart. A pair of two domain words that do not share any LDA topic is internet-galleries. 5.3 Window-context Model Our second proposed model (WINDOW) extracts features from discourse relations and common words between context sentences in the context windows of the source and target components. Definition. Context window of an argument component is a text segment formed by neighboring sentences and the covering sentence of the component. The neighboring sentences are called context sentences, and must be in the same paragraph with the component. In this study, context windows are determined using window-size heuristics.7 Given a windowsize n, we form a context window by grouping the covering sentence with at most n adjacently preceding and n adjacently following sentences that must be in the same paragraph. 
To minimize noise in feature space, we require that context windows of the source and target components must be mutually exclusive. Biran and Rambow (2011) observed that the relation between a source argument and a target argument is usually instantiated by some elaboration/justification provided in a support of the source argument. Therefore we prioritize the context window of source component when it overlaps with the target context window. Particularly, we keep overlapping context sentences in the source window, and remove them from the target window. For example, with window-size 1, context windows of the Claim in sentence 2 in Figure 1 and the Premise in sentence 4 overlap at sentence 3. When the Claim is set as source component, its 7Due to the paragraph constraint and window overlapping, window-size does not indicate the actual context window size. However, window-size tells what the maximum size a window can have. 1131 BASELINE Common features Word pairs + Production rules TOPIC Common features Topic context features + Window context features WINDOW Common features Window context features COMBINED Common features Topic context features + Window context features + Word pairs + Production rules FULL Common features Topic context features Figure 3: Features used in different models. Feature change across models are denoted by connectors. context window includes sentences {2, 3}, and the Premise as a target has context window with only sentence 4. We extract three Window-context feature sets from the context windows: Common word: as common word counts between adjacent sentences were shown useful for argument mining (Nguyen and Litman, 2016), we count common words between the covering sentence with preceding context sentences, and with following context sentences, for source and target components. Discourse relation: for both source and target components, we extract discourse relations between context sentences, and within the covering sentence. We also extract discourse relations between each pair of source context sentence and target context sentence. Each relation defines a boolean feature. We extract both Penn Discourse Treebank (PDTB) relations (Prasad et al., 2008) and Rhetorical Structure Theory Discourse Treebank (RST-DTB) relations (Carlson et al., 2001) using publicly available discourse parsers (Ji and Eisenstein, 2014; Wang and Lan, 2015). Each PDTB relation has sense label defined in a 3-layered (class, type, subtype), e.g., CONTINGENCY.Cause.result. While there are only four semantic class labels at the class-level which may not cover well different aspects of argumentative relation, subtype-level output is not available given the discourse parser we use. Thus, we use relations at type-level as features. For RSTDTB relations, we use only relation labels, but ignore the nucleus and satellite labels of components as they do not provide more information given the component order in the pair. Because temporal relations were shown not helpful for argument mining tasks (Biran and Rambow, 2011; Stab and Gurevych, 2014b), we exclude them here. Discourse marker: while the baseline model only considers discourse markers within the argument components, we define a boolean feature for each discourse marker classifying whether the marker is present before the covering sentence of the source and target components or not. This implementation aims to characterize the discourse of the preceding and following text segments of each argument component separately. 
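To make the context-window construction concrete, the following minimal Python sketch forms both windows and resolves overlap in favour of the source component, as described above. It assumes sentences are indexed sequentially and that each paragraph is given by the indices of its first and last sentence; the function names, the data layout, and the decision to always keep the target's own covering sentence are our assumptions rather than details taken from the paper.

def context_window(sent_idx, par_bounds, window_size):
    # par_bounds = (first, last) sentence indices of the paragraph that
    # contains the argument component; windows never cross paragraphs.
    first, last = par_bounds
    lo = max(first, sent_idx - window_size)
    hi = min(last, sent_idx + window_size)
    return set(range(lo, hi + 1))

def build_windows(src_sent, tgt_sent, src_par, tgt_par, window_size=3):
    # Overlapping context sentences stay in the source window and are
    # dropped from the target window, following Biran and Rambow (2011).
    src_win = context_window(src_sent, src_par, window_size)
    tgt_win = context_window(tgt_sent, tgt_par, window_size)
    tgt_win = (tgt_win - src_win) | {tgt_sent}
    return src_win, tgt_win

# Example from the text: Claim in sentence 2, Premise in sentence 4,
# window-size 1, paragraph spanning sentences 2-4.
print(build_windows(2, 4, (2, 4), (2, 4), window_size=1))   # ({2, 3}, {4})

The common-word, discourse-relation, and discourse-marker features are then extracted from (and between) the two resulting windows.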
Finally, we include the common feature set. 5.4 Combined Model While Window-context features are extracted from surrounding text of the argument components, which exploits the local context, the Topic-context features are an abstraction of topicdependent information, e.g., domain words are defined within the context of topic domain (Nguyen and Litman, 2015), and thus make use of the global context of the topic domain. We believe that local and global context information represent complementary aspects of the relation between argument components. Thus, we expect to achieve the best performance by combining Window-context and Topic-context models. 5.5 Full Model Finally, the FULL model includes all features in BASELINE and COMBINED models. That is, the FULL model is the COMBINED model plus word pairs and production rules. A summary of all models is shown in Figure 3. 6 Experiments 6.1 Task 1: Support vs. Non-support Tuning Window-size Parameter Because our WINDOW model uses a windowsize parameter to form context windows of the source and target argument components, we investigate how the window-size of the context window impacts the prediction performance of the Window-context features. We set up a model with only Window-context features and determine the 1132 REPORTED BASELINE TOPIC WINDOW COMBINED FULL Accuracy 0.863 0.869 0.857 0.857 0.870 0.877 Kappa – 0.445 0.407 0.449 0.507* 0.481 Macro F1 0.722 0.722 0.703 0.724 0.753* 0.739 Macro Precision 0.739 0.758 0.728 0.729 0.754 0.777 Macro Recall 0.705 0.699 0.685 0.720 0.752* 0.715 F1:Support 0.519 0.519 0.488 0.533 0.583* 0.550 F1:Non-support 0.920 0.925 0.917 0.916* 0.923 0.929 Table 2: Support vs. Non-support classification performances on test set. Best values are in bold. Values smaller than baseline are underlined. * indicates significantly different from the baseline (p < 0.05). 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.7 0 1 2 3 4 5 6 7 8 Window-size F1 score of Window-context Features Figure 4: Performance of Window-context feature set by window-size. window-size in range [0, 8]8 that yields the best F1 score in 10-fold cross validation. We use the training set as determined in (Stab and Gurevych, 2014b) to train/test9 the models using LibLINEAR algorithm (Fan et al., 2008) without parameter or feature optimization. Cross-validations are conducted using Weka (Hall et al., 2009). We use Stanford parser (Klein and Manning, 2003) to perform text processing. As shown in Figure 4, while increasing the window-size from 2 to 3 improves performance (significantly), using window-sizes greater than 3 does not gain further improvement. We hypothesize that after a certain limit, larger context windows will produce more noise than helpful information for the prediction. Therefore, we set the window-size to 3 in all of our experiments involving Window-context model (all with a separate test set). 8Windows-size 0 means covering sentence is the only context sentence. We experimented with not using context sentence at all and obtained worse performance. Our data does not have context window with window-size 9 or larger. 9Note that via cross validation, in each fold some of our training set serves as a development set. Performance on Test Set We train all models using the training set and report their performances on the test set in Table 2. We also compare our baseline to the reported performance (REPORT) for Support vs. Non-support classification in (Stab and Gurevych, 2014b). 
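The window-size search described under "Tuning Window-size Parameter" can be sketched as a simple cross-validated grid search. The sketch below uses scikit-learn's LinearSVC (also backed by LIBLINEAR) as a stand-in for the Weka/LibLINEAR setup used in the paper; build_features is a hypothetical callback that extracts only the Window-context features for a given window-size, and the exact F1 variant optimized in the paper is not specified, so macro F1 is used here.

from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def tune_window_size(build_features, labels, max_size=8, folds=10):
    # Pick the window-size in [0, max_size] that maximizes mean F1
    # under k-fold cross-validation on the training set.
    scores = {}
    for n in range(max_size + 1):
        X = build_features(window_size=n)    # Window-context features only
        clf = LinearSVC()                    # default parameters, no tuning
        scores[n] = cross_val_score(clf, X, labels, cv=folds,
                                    scoring='f1_macro').mean()
    best = max(scores, key=scores.get)
    return best, scores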
The learning algorithm with parameters are kept the same as in the window-size tuning experiment. Given the skewed class distribution of this data, Accuracy and F1 of Non-support (the major class) are less important than Kappa, F1, and F1 of Support (the minor class). To conduct T-tests for performance significance, we split the test data into subsets by essays’ ID, and record prediction performance for individual essays. We first notice that the performances of our baseline model are better than (or equal to) REPORTED, except the Macro Recall. We reason that these performance disparities may be due to the differences in feature extractions between our implementation and Stab and Gurevych’s, and also due to the minor set of new features (e.g., new predicted labels, expanded marker set, component position) that we added in our baseline. Comparing proposed models with BASELINE, we see that WINDOW, COMBINED, and FULL models outperform BASELINE in important metrics: Kappa, F1, Recall, but TOPIC yields worse performances than BASELINE. However, the fact that COMBINED outperforms BASELINE, especially with significantly higher Kappa, F1, Recall, and F1:Support, has shown the value of Topiccontext features. While Topic-context features alone are not effective, they help improve WINDOW model which supports our hypothesis that Topic-context and Window-context features are complementary aspects of context, and they together obtain better performance. Comparing our proposed TOPIC, WINDOW, 1133 BASELINE TOPIC WINDOW COMBINED FULL Accuracy 0.885 0.886 0.872 0.885 0.887 Kappa 0.245 0.305* 0.306* 0.342* 0.274* Macro F1 0.618 0.651* 0.652* 0.670* 0.634* Macro Precision 0.680 0.692 0.663 0.697 0.693 Macro Recall 0.595 0.628* 0.644* 0.652* 0.609* F1:Support 0.937 0.937 0.928* 0.936 0.938 F1:Attack 0.300 0.365* 0.376* 0.404* 0.330* Table 3: 5×10-fold cross validation performance of Support vs. Attack classification. * indicates significantly different from the baseline (p < 0.01). COMBINED models with each other shows that COMBINED obtains the best performance while TOPIC performs the worst, which reveals that Topic-context feature set is less effective than Window-context set. While FULL model achieves the best Accuracy, Precision, and F1:Non-support, it has lower performance than COMBINED model in important metrics: Kappa, F1, F1:Support. We reason that the noise caused by word pairs and production rules even dominate the effectiveness of Topic-context and Window-context features, which degrades the overall performance. Overall, by combining TOPIC and WINDOW models, we obtain the best performance. Most notably, we obtain the highest improvement in F1:Support, and have the best balance between Precision and Recall values among all models. These reveal that our contextual features not only dominate generic features like word pairs and production rules, but also are effective to predict minor positive class (i.e., Support). 6.2 Task 2: Support vs. Attack To evaluate the robustness of our proposed models, we conduct an argumentative relation classification experiment that classifies a relation as Support or Attack. Because this task was not studied in (Stab and Gurevych, 2014b) and the training/test split for Support vs. Not task is not applicable here, we conduct 5×10-fold cross validation. We do not optimize the window-size parameter of the WINDOW model, and use the value 3 as set up before. Average prediction performance of all models are reported in Table 3. 
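The per-essay significance testing mentioned above can be sketched as follows. The paper does not specify which per-essay statistic is recorded, so F1 of the Support class is used here as an illustrative choice, and the helper names are ours.

from collections import defaultdict
from scipy.stats import ttest_rel
from sklearn.metrics import f1_score

def per_essay_f1(essay_ids, gold, pred, positive='Support'):
    # Group gold labels and predictions by essay ID, then score each essay.
    by_essay = defaultdict(lambda: ([], []))
    for eid, g, p in zip(essay_ids, gold, pred):
        by_essay[eid][0].append(g)
        by_essay[eid][1].append(p)
    return {eid: f1_score(g, p, pos_label=positive, zero_division=0)
            for eid, (g, p) in by_essay.items()}

def paired_test(essay_ids, gold, pred_a, pred_b):
    # Paired t-test over per-essay scores of two competing models.
    f1_a = per_essay_f1(essay_ids, gold, pred_a)
    f1_b = per_essay_f1(essay_ids, gold, pred_b)
    essays = sorted(f1_a)
    return ttest_rel([f1_a[e] for e in essays], [f1_b[e] for e in essays])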
Comparing our proposed models with the baseline shows that all of our proposed models significantly outperform the baseline in important metrics: Kappa, F1, F1:Attack. More notably than in the Support vs. Non-support classification, all of our proposed models predict the minor class (Attack) significantly more effectively than the baseline. The baseline achieves significantly higher F1:Support than WINDOW model. However, F1:Support of the baseline is in a tie with TOPIC, COMBINED, and FULL. Comparing our proposed models, we see that TOPIC and WINDOW models reveal different behaviors. TOPIC model has significantly higher Precision and F1:Support, and significantly lower Recall and F1:Attack than WINDOW. Moreover, WINDOW model has slightly higher Kappa, F1, but significantly lower Accuracy. These comparisons indicate that Topic-context and Windowcontext features are equally effective but impact differently to the prediction. The different nature between these two feature sets is clearer than in the prior experiment, as now the classification involves classes that are more semantically different, i.e., Support vs. Attack. We recall that TOPIC model performs worse than WINDOW model in Support vs. Non-support task. Our FULL model performs significantly worse than all of TOPIC, WINDOW, and COMBINED in Kappa, F1, Recall, and F1:Attack. Along with results from Support vs. Non-support task, this further suggests that word pairs and production rules are less effective and cannot be combined well with our contextual features. Despite the fact that the Support vs. Attack task (Task 2) has smaller and more imbalanced data than the Support vs. Non-support (Task 1), our proposed contextual features seem to add even more value in Task 2 compared to Task 1. Using Kappa to roughly compare prediction performance across the two tasks, we observe a greater performance improvement from Baseline to Combined model in Task 2 than in Task 1. This is an evidence that our proposed context-aware features 1134 work well even in a more imbalanced with smaller data classification task. The lower performance values of all models in Support vs. Attack than in Support vs. Non-support indirectly suggest that Support vs. Attack classification is a more difficult task. We hypothesize that the difference between support and attack exposes a deeper semantic relation than that between support and no-relation. We plan to extract textual text similarity and textual entailment features (Cabrio and Villata, 2012; Boltuˇzi´c and ˇSnajder, 2014) to investigate this hypothesis in our future work. 7 Conclusions and Future Work In this paper, we have presented context-aware argumentative relation mining that makes use of contextual features by exploiting information from topic context and context sentences. We have explored different ways to incorporate our proposed features with baseline features used in a prior study, and obtained insightful results about feature effectiveness. Experimental results show that Topic-context and Window-context features are both effective but impact predictive performance measures differently. In addition, predicting an argumentative relation will benefit most from combining these two set of features as they capture complementary aspects of context to better characterize the argumentation in justification. The results obtained in this preliminary study are promising and encourage us to explore more directions to enable contextual features. 
Our next step will investigate uses of topic segmentation to identify context sentences and compare this linguistically-motivated approach to our current window-size heuristic. We plan to follow prior research on graph optimization to refine the argumentation structure and improve argumentative relation prediction. Also, we will apply our contextaware argumentative relation mining to different argument mining corpora to further evaluate its generality. Acknowledgments This research is supported by NSF Grant 1122504. We thank the reviewers for their helpful feedback. We also thank Christian Stab for providing us the data split for the first experiment. References Or Biran and Owen Rambow. 2011. Identifying Justifications in Written Dialogs by Classifying Text as Argumentative. International Journal of Semantic Computing, 5(4):363–381. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022. Filip Boltuˇzi´c and Jan ˇSnajder. 2014. Back up your Stance: Recognizing Arguments in Online Discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49–58, Baltimore, Maryland, June. Association for Computational Linguistics. Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the WRITE Stuff: Automatic Identification of Discourse Structure in Student Essays. IEEE Intelligent Systems, 18(1):32–39, January. Elena Cabrio and Serena Villata. 2012. Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, ACL ’12, pages 208–212, Stroudsburg, PA, USA. Association for Computational Linguistics. Elena Cabrio, Sara Tonelli, and Serena Villata. 2013. From Discourse Analysis to Argumentation Schemes and Back: Relations and Differences. In Computational Logic in Multi-Agent Systems, volume 8143 of Lecture Notes in Computer Science, pages 1–17. Springer Berlin Heidelberg. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue - Volume 16, SIGDIAL ’01, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics. Jianguang Du, Jing Jiang, Liu Yang, Dandan Song, and Lejian Liao. 2014. Shell Miner: Mining Organizational Phrases in Argumentative Texts in Social Media. In Proceedings of the 2014 IEEE International Conference on Data Mining, ICDM ’14, pages 797– 802, Washington, DC, USA. IEEE Computer Society. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. The Journal of Machine Learning Research, 9:1871–1874, June. Eirini Florou, Stasinos Konstantopoulos, Antonis Koukourikos, and Pythagoras Karampiperis. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social 1135 Sciences, and Humanities, pages 49–54, Sofia, Bulgaria, August. Association for Computational Linguistics. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. ACM SIGKDD Explorations Newsletter, 11(1):10–18, November. Yangfeng Ji and Jacob Eisenstein. 2014. Representation Learning for Text-level Discourse Parsing. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13–24, Baltimore, Maryland, June. Association for Computational Linguistics. Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 423–430. Association for Computational Linguistics. Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context Dependent Claim Detection. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1489– 1500, Dublin, Ireland, August. Marco Lippi and Paolo Torroni. 2015. Contextindependent Claim Detection for Argument Mining. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pages 185–191, Buenos Aires, Argentina. AAAI Press. Nitin Madnani, Michael Heilman, Joel Tetreault, and Martin Chodorow. 2012. Identifying HighLevel Organizational Elements in Argumentative Discourse. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 20–28, Montr`eal, Canada. Association for Computational Linguistics. Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic Detection of Arguments in Legal Texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL ’07, pages 225–230, New York, NY, USA. ACM. Huy Nguyen and Diane Litman. 2015. Extracting Argument and Domain Words for Identifying Argument Components in Texts. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 22–28, Denver, CO, June. Association for Computational Linguistics. Huy Nguyen and Diane Litman. 2016. Improving argument mining in student essays by learning and exploiting argument indicators versus essay topics. In Proceedings 29th International FLAIRS Conference, Key Largo, FL, May. Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation Mining: The Detection, Classification and Structure of Arguments in Text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL ’09, pages 98–107, New York, NY, USA. ACM. Joonsuk Park and Claire Cardie. 2014. Identifying Appropriate Support for Propositions in Online User Comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29–38, Baltimore, Maryland, June. Association for Computational Linguistics. Andreas Peldszus and Manfred Stede. 2013. From Argument Diagrams to Argumentation Mining in Texts: A Survey. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 7(1):1–31, January. Andreas Peldszus and Manfred Stede. 2015a. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 938–948, Lisbon, Portugal, September. Association for Computational Linguistics. Andreas Peldszus and Manfred Stede. 2015b. Towards Detecting Counter-considerations in Text. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 104–109, Denver, CO, June. Association for Computational Linguistics. Andreas Peldszus. 2014. Towards segment-based recognition of argumentation structure in short texts. In Proceedings of the First Workshop on Argumentation Mining, pages 88–97, Baltimore, Maryland, June. 
Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC-08), Marrakech, Morocco, May. European Language Resources Association (ELRA). ACL Anthology Identifier: L08-1093. Vahed Qazvinian and Dragomir R. Radev. 2010. Identifying Non-explicit Citing Sentences for Citationbased Summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 555–564, Stroudsburg, PA, USA. Association for Computational Linguistics. Diarmuid ´O S´eaghdha and Simone Teufel. 2014. Unsupervised learning of rhetorical structure with untopic models. In Proceedings of the 25th International Conference on Computational Linguistics (COLING-14), Dublin, Ireland. Alan Sergeant. 2013. Automatic argumentation extraction. In The 10th European Semantic Web 1136 Conference, pages 656–660, Montpellier, France. Springer. Christian Stab and Iryna Gurevych. 2014a. Annotating Argument Components and Relations in Persuasive Essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501–1510, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Christian Stab and Iryna Gurevych. 2014b. Identifying Argumentative Discourse Structures in Persuasive Essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46–56, Doha, Qatar. Association for Computational Linguistics. Christian Stab, Christian Kirschner, Judith EckleKohler, and Iryna Gurevych. 2014. Argumentation Mining in Persuasive Essays and Scientific Articles from the Discourse Structure Perspective. In Elena Cabrio, Serena Villata, and Adam Wyner, editors, Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing, pages 40–49, Bertinoro, Italy, July. CEUR-WS. Simone Teufel and Marc Moens. 2002. Summarizing Scientific Articles: Experiments with Relevance and Rhetorical Status. Computational Linguistics, 28(4), December. Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press. Jianxiang Wang and Man Lan. 2015. A Refined End-to-End Discourse Parser. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task, pages 17–24, Beijing, China, July. Association for Computational Linguistics. 1137
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1138–1147, Berlin, Germany, August 7-12, 2016. ©2016 Association for Computational Linguistics

Leveraging Inflection Tables for Stemming and Lemmatization
Garrett Nicolai and Grzegorz Kondrak
Department of Computing Science, University of Alberta
{nicolai,gkondrak}@ualberta.ca

Abstract

We present several methods for stemming and lemmatization based on discriminative string transduction. We exploit the paradigmatic regularity of semi-structured inflection tables to identify stems in an unsupervised manner with over 85% accuracy. Experiments on English, Dutch and German show that our stemmers substantially outperform Snowball and Morfessor, and approach the accuracy of a supervised model. Furthermore, the generated stems are more consistent than those annotated by experts. Our direct lemmatization model is more accurate than Morfette and Lemming on most datasets. Finally, we test our methods on the data from the shared task on morphological reinflection.

1 Introduction

Many languages contain multiple inflected forms that correspond to the same dictionary word. Inflection is a grammatical procedure that has little impact on the meaning of the word. For example, the German words in Table 1 all refer to the action of giving. When working with these languages, it is often beneficial to establish a consistent representation across a set of inflections. This is the task that we address here.

Word form   Meaning     Tag    Stem
geben       “to give”   INF    geb
gibt        “gives”     3SIE   gib
gab         “gave”      1SIA   gab
gegeben     “given”     PP     geb
Table 1: Examples of German word-forms corresponding to the lemma geben.

There are two principal approaches to inflectional simplification: stemming and lemmatization. Stemming aims at removing inflectional affixes from a word form. It can be viewed as a kind of word segmentation, in which the boundaries of the stem are identified within the word; no attempt is made to restore stem changes that may occur as part of the inflection process. The goal of lemmatization is to map any inflected form to its unique lemma, which is typically the word form that represents a set of related inflections in a dictionary. Unlike stemming, lemmatization must always produce an actual word form.

In this paper, we present a discriminative string transduction approach to both stemming and lemmatization. Supervised stemmers require morphologically annotated corpora, which are expensive to build. We remove this constraint by extracting stems from semi-structured inflection tables, such as the one shown in Table 2, in an unsupervised manner. We design two transduction models that are trained on such stems, and evaluate them on unseen forms against a supervised model. We then extend our stemming models to perform the lemmatization task, and to incorporate an unannotated corpus. We evaluate them on several datasets. Our best system improves the state of the art for Dutch, German, and Spanish. Finally, we test our methods on the data from the shared task on morphological reinflection.

This paper is organized as follows. In Section 2, we present an overview of prior work on inflectional simplification. In Section 3, we describe our stemming methodology, followed by three types of evaluation experiments in Section 4. In Section 5, we describe our approach to lemmatization, followed by both intrinsic and extrinsic experiments in Section 6. Section 7 concludes the paper.
1138 2 Related Work In this section, we review prior work on stemming and lemmatization. 2.1 Stemming and Segmentation Stemming is a sub-task of the larger problem of morphological segmentation. Because of the scarcity of morphologically-annotated data, many segmentation algorithms are unsupervised or rulebased. The Porter stemmer (Porter, 1980) and its derivatives, such as Snowball, apply hand-crafted context rules to strip affixes from a word. Creation of such rule-based programs requires significant effort and expert knowledge. We use structured inflection tables to create training data for a discriminative transducer. Morfessor (Creutz and Lagus, 2002) and Linguistica (Goldsmith, 2001) are unsupervised word segmenters, which divide words into regularly occurring sub-sequences by applying the minimum description length (MDL) principle. While these methods are good at identifying common morphemes, they make no distinction between stems and affixes, and thus cannot be used for stemming. Morfessor Categories-MAP (Creutz and Lagus, 2004; Creutz and Lagus, 2005) distinguishes between stems and affixes, but not between derivational and inflectional affixes. We adapt a more recent version (Gr¨onroos et al., 2014) to be used as an approximate stemmer. Poon et al. (2009) abandons the generative model of Morfessor for a log-linear model that predicts segmentations in sequence. The discriminative approach allows for the incorporation of several priors that minimize over-segmentation. Their unsupervised model outperforms Morfessor, and they are also able to report semi- and fullysupervised results. We also approach the problem using a discriminative method, but by aligning structured inflection tables, we can take advantage of linguistic knowledge, without requiring costly annotation. Ruokolainen et al. (2014) obtain further improvements by combining a structured perceptron CRF with letter successor variety (LSV), and the unsupervised features of Creutz and Lagus (2004). Their system is inherently supervised, while our stem annotations are derived in an unsupervised manner. Cotterell et al. (2015) introduce Chipmunk, a Singular Plural 1st 2nd 3rd 1st Present doy das da damos Imperfect daba dabas daba d´abamos Preterite di diste dio dimos Future dar´e dar´as dar´a daramos Table 2: A partial inflection table for the Spanish verb dar “to give”. fully-supervised system for labeled morphological segmentation. Extending the sequence-prediction models, Chipmunk makes use of data that is annotated not only for stem or affix, but also for inflectional role, effectively combining morphological segmentation and morphological analysis. While highly accurate, Chipmunk is limited in that it requires data that is fully-annotated for both segmentation and inflection. Our system has access to the morphological tags in inflection tables, but segmentation and tag alignment are performed in an unsupervised way. 2.2 Lemmatization Unlike stemmers, which can be unsupervised, lemmatizers typically require annotated training data. In addition, some lemmatizers assume access to the morphological tag of the word, and/or the surrounding words in the text. Our focus is on context-free lemmatization, which could later be combined with a contextual disambiguation module. Lemmatization is often part of the morphological analysis task, which aims at annotating each word-form with its lemma and morphological tag. 
Toutanova and Cherry (2009) learn a joint model for contextual lemmatization and part-of-speech prediction from a morphologically annotated lexicon. Their transduction model is tightly integrated with the POS information, which makes comparison difficult. However, in Section 6, we evaluate our approach against two other fully-supervised morphological analyzers: Morfette (Chrupała et al., 2008) and Lemming (M¨uller et al., 2015). Both of these systems perform lemmatization and morphological analysis in context, but can be trained to learn non-contextual models. Morfette requires morphological tags during training, while Lemming requires a morphological model constructed by its sister program, Marmot (M¨uller et al., 2013). 1139 3 Stemming Methods We approach stemming as a string transduction task. Stemming can be performed by inserting morpheme boundary markers between the stem and the affixes. For example, the German verb form gegeben is transduced into ge+geb+en, which induces the stem geb. 3.1 Character Alignment The training of a transduction model requires a set of aligned pairs of source and target strings. The alignment involves every input and output character; the insertion and deletion operations are disallowed. Atomic character transformations are then extracted from the alignments. We infer the alignment with a modified version of the M2M aligner of Jiampojamarn et al. (2007). The program applies the ExpectationMaximization algorithm with the objective to maximize the joint likelihood of its aligned source and target pairs. For our task, the source and target strings are nearly identical, except that the target includes stem-affix boundary markers. In order to account for every character in the target, which is usually longer than the source, we allow one-tomany alignment. This has the effect of tying the markers to the edge of a stem or affix. In order to encourage alignments between identical characters, we modify the aligner to generalize all identity transformations into a single match operation. 3.2 Supervised Transduction Once we have aligned the source and target pairs, we proceed to train a word-to-stem transduction model for stemming unseen test instances. The word-to-stem model learns where to insert boundary markers. We refer to a model that is trained on annotated morphological segmentations as our supervised method. We perform string transduction by adapting DIRECTL+, a tool originally designed for graphemeto-phoneme conversion (Jiampojamarn et al., 2010). DIRECTL+ is a feature-rich, discriminative character transducer that searches for a modeloptimal sequence of character transformation rules for its input. The core of the engine is a dynamic programming algorithm capable of transducing many consecutive characters in a single operation. Using a structured version of the MIRA algorithm (McDonald et al., 2005), training attempts to assign weights to each feature so that its STEM|INF geb|en setz|en tu|n STEM|1SIA gab|setz|te tat|STEM|2SIE gib|st setz|t tu|st PP|STEM|PP ge|geb|en ge|setz|t ge|ta|n Table 3: Stemming of the training data based on the patterns of regularity in inflectional tables. Stemmas are shown in bold. linear model separates the gold-standard derivation from all others in its search space. DIRECTL+ uses a number of feature templates to assess the quality of a rule: source context, target n-gram, and joint n-gram features. 
Context features conjoin the rule with indicators for all source character n-grams within a fixed window of where the rule is being applied. Target n-grams provide indicators on target character sequences, describing the shape of the target as it is being produced, and may also be conjoined with our source context features. Joint n-grams build indicators on rule sequences, combining source and target context, and memorizing frequently-used rule patterns. Following Toutanova and Cherry (2009), we modify the out-of-the-box version of DIRECTL+ by implementing an abstract copy feature that indicates when a rule simply copies its source characters into the target, e.g. b →b. The copy feature has the effect of biasing the transducer towards preserving the source characters during transduction. 3.3 Unsupervised Segmentation In order to train a fully-supervised model for stemming, large lists of morphologically-segmented words are generally required. While such annotated corpora are rare, semi-structured, crowdsourced inflection tables are available for many languages on websites such as Wiktionary (Table 2). In this section, we introduce an unsupervised method of inducing stems by leveraging paradigmatic regularity in inflection tables. Sets of inflection tables often exhibit the same inflectional patterns, called paradigms, which are based on phonological, semantic, or morphological criteria (cf. Table 3). Each table consists of lists of word forms, including the lemma. The number of distinct stems, such as ‘geb’ and ‘gib’ for the verb geben, is typically very small, averaging slightly over two per German verb inflection table. 1140 Source g i b t Target g i b +t Tags STEM 3SIE Joint g e b +3SIE Table 4: Alignment of the various representations of the word gibt. The number of distinct affix forms corresponding to the same inflectional form across different lemmas is also small, averaging below three for German verbs. For example, the second person singular indicative present suffix is always either -st, -est, or -t. We take advantage of this relative consistency to determine the boundaries between the stems and affixes of each word form in an unsupervised manner. We first associate each word form in the training data with an abstract tag sequence, which is typically composed of the STEM tag and a suffix tag representing a given inflection slot (Table 3). We then apply the unsupervised aligner to determine the most likely alignment between the character sequences and the tags, which are treated as indivisible units. The aligner simultaneously learns common representations for stems within a single inflection table, as well as common representations for each affix across multiple tables. Some inflections, such as the German past participle (PP in Table 3) involve a circumfix, which can be analyzed as a prefix-suffix combination. Prior to the alignment, we associate all forms that belong to the inflection slots involving circumfixation with tag sequences composed of three tags. Occasionally, a word form will only have a suffix where one would normally expect a circumfix (e.g. existiert). In order to facilitate tag alignment in such cases, we prepend a dummy null character to each surface word form. After the stem-affix boundaries have been identified, we proceed to train a word-to-stem transduction model as described in Section 3.2. We refer to this unsupervised approach as our basic method (cf. Figure 1). 
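As a concrete illustration of the data preparation just described, the sketch below assembles the aligner input for one inflection table: every word form is paired with an abstract tag sequence, circumfixed slots get a three-tag sequence, and a dummy null character is prepended to each form. The slot names, the set of circumfixed slots, and the <null> symbol are illustrative assumptions; the EM-based M2M alignment itself is treated as an external step.

CIRCUMFIX_SLOTS = {'PP'}    # slots realized as prefix + suffix (assumption)

def aligner_input(table):
    # table maps an inflection tag (e.g. '3SIE') to its word form.
    # Returns (character sequence, tag sequence) pairs for the aligner.
    pairs = []
    for tag, form in table.items():
        chars = ['<null>'] + list(form)       # dummy null character
        if tag in CIRCUMFIX_SLOTS:
            tags = [tag, 'STEM', tag]         # prefix + stem + suffix
        else:
            tags = ['STEM', tag]              # stem + suffix
        pairs.append((chars, tags))
    return pairs

geben = {'INF': 'geben', '3SIE': 'gibt', '1SIA': 'gab', 'PP': 'gegeben'}
for chars, tags in aligner_input(geben):
    print(''.join(chars[1:]), tags)

Aligning such pairs jointly over many tables is what ties a consistent stem representation to the STEM tag and a consistent affix representation to each inflection tag.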
3.4 Joint Stemming and Tagging The method described in the previous section fails to make use of a key piece of information in the inflection table: the lemma. The stem of an inflected form is typically either identical or very similar to the stem of its lemma, or stemma (Table 3). Our Words Noun Verb Adj English 50,155 2 5 3 Dutch 101,667 2 9 3 German 96,038 8 27 48 Table 5: The number of words and distinct inflections for each language in the CELEX datasets. joint method takes advantage of this similarity by transducing word-forms into stemmas with tags. The format of the training data for the word-tostemma model is different from the word-to-stem model. After the initial segmentation of the source word-forms into morphemes by the unsupervised aligner, as described in Section 3.3, the stems are replaced with the corresponding stemmas, and the affixes are replaced with the inflection tags. For example, the form gibt is paired with the sequence geb+3SIE, with the stem and stemma re-aligned at the character level as shown in Table 4. Unlike the basic method, which simply inserts morpheme breaks into word-forms, the joint method uses the tags to identify the boundaries between stems and affixes. At test time, the input word-form is transduced into a stemma and tag sequence. The character string that has generated the tag is then stripped from the input word-form to obtain the stem. By making use of both the tags and the stemma, the word-to-stemma model jointly optimizes the stem and affix combination. We refer to this unsupervised approach as our joint method. 4 Stemming Experiments Precise evaluation of stemming methods requires morphologically annotated lexicons, which are rare. Unlike lemmas, stems are abstract representations, rather than actual word forms. Unsurprisingly, annotators do not always agree on the segmentation of a word. In this section, we describe three experiments for evaluating stem extraction, intrinsic accuracy, and consistency. We evaluate our methods against three systems that are based on very different principles. Snowball1 is a rule-based program based on the methodology of the Porter Stemmer. Morfessor FlatCat (Gr¨onroos et al., 2014) performs unsupervised morphological segmentation, and approximates stemming by distinguishing stems and af1http://snowball.tartarus.org 1141 EN NL DE Our method 85.9 88.0 85.7 Snowball 48.2 58.8 49.5 Morfessor 61.4 71.4 61.4 Table 6: Unsupervised stemming accuracy of the CELEX training set. fixes.2 Chipmunk (Cotterell et al., 2015), is a fully-supervised system that represents the current state of the art. 4.1 Data We perform an evaluation of stemming on English (EN), Dutch (NL), and German (DE) lexicons from CELEX (Baayen et al., 1995). The three languages vary in terms of morphological complexity (Table 5). We use the morphological boundary annotations for testing all stemming systems, as well as for training our supervised system. For both unsupervised systems, we could build training sets from any inflection tables that contain unsegmented word-forms. However, in order to perform a precise comparison between the supervised and unsupervised systems, we extract the inflection tables from CELEX, disregarding the segmentation information. Each system is represented by a single stemming model that works on nouns, verbs, and adjectives. Due to differences in representation, the number of training instances vary slightly between models, but the number of words is constant (Table 5). 
In order to demonstrate that our unsupervised methods require no segmentation information, we create additional German training sets using the inflection tables extracted from Wiktionary by Durrett and DeNero (2013). The sets contain 18,912 noun forms and 43,929 verb forms. We derive separate models for verbs and nouns in order to compare the difficulty of stemming different parts of speech. The test sets for both CELEX and Wiktionary data come from CELEX, and consist of 5252, 6155, and 9817 unique forms for English, Dutch, and German, respectively. The German test set contains 2620 nouns, 3837 verbs, and 3360 adjectives. Chipmunk3 requires training data in which ev2Morfessor is applied to the union of the training and test data. 3http://cistern.cis.lmu.de/chipmunk EN NL DE Supervised 98.5 96.0 91.2 Basic 82.3 89.1 80.9 Joint 94.6 93.2 86.0 Snowball 50.0 58.4 48.2 Morfessor 65.2 60.9 51.8 Table 7: Stemming accuracy of systems trained and tested on CELEX datasets. ery morpheme of a word is annotated for morphological function. Since this information is not included in CELEX, we train and test Chipmunk, as well as a version of our supervised model, on the data created by Cotterell et al. (2015), which is much smaller. The English and German segmentation datasets contain 1161 and 1266 training instances, and 816 and 952 test instances, respectively. 4.2 Stem Extraction Evaluation First, we evaluate our unsupervised segmentation approach, which serves as the basis for our basic and joint models, on the union of the training and development parts of the CELEX dataset. We are interested how often the stems induced by the method described in Section 3.3 match the stem annotations in the CELEX database. The results are presented in Table 6. Our method is substantially more accurate than either Snowball or Morfessor. Snowball, despite being called a stemming algorithm, often eliminates derivational affixes; e.g. able in unbearable. Morfessor makes similar mistakes, although less often. Our method tends to prefer longer stems and shorter affixes. For example, it stems verwandtestem, as verwandte, while CELEX has verwandt. 4.3 Intrinsic Evaluation The results of the intrinsic evaluation of the stemming accuracy on unseen forms in Tables 7-9 demonstrate the quality of our three models. The joint model performs better than the basic model, and approaches the accuracy of the supervised model. On the CELEX data, our unsupervised joint model substantially outperforms Snowball and Morfessor on all three languages (Table 7).4 4The decrease in Morfessor accuracy between Tables 6 and 7 can be attributed to a different POS distribution between training and testing. 1142 Noun Verb Basic 76.8 90.3 Joint 85.2 91.1 Snowball 55.5 39.8 Morfessor 61.9 34.9 Table 8: German stemming accuracy of systems trained on Wiktionary data, and tested on the CELEX data. EN DE Supervised 94.7 85.1 Chipmunk 94.9 87.4 Table 9: Stemming accuracy of systems trained and tested on the Chipmunk data. These results are further confirmed on the German Wiktionary data (Table 8). Our supervised model performs almost as well as Chipmunk on its dataset (Table 9). A major advantage of the joint model over the basic model is its tag awareness (cf. Table 4). Although the tags are not always correctly recovered on the test data, they often allow the model to select the right analysis. For example, the basic model erroneously segments the German form erkl¨arte as erkl¨art+e because +e is a common verbal, adjectival and nominal suffix. 
The joint model, recognizing er as a verbal derivational prefix, predicts a verbal inflection tag (+1SIA), and the correct segmentation erkl¨ar+te. Verbal stems are unlikely to end in ¨art, and +te, unlike +e, can only be a verbal suffix. 4.4 Consistency Evaluation When stemming is used for inflectional simplification, it should ideally produce the same stem for all word-forms that correspond to a given lemma. In many cases, this is not an attainable goal because of internal stem changes (cf. Table 1). However, most inflected words follow regular paradigms, which involve no stem changes. For example, all forms of the Spanish verb cantar contain the substring cant, which is considered the common stem. We quantify the extent to which the various systems approximate this goal by calculating the average number of unique generated stems per inflection table in the CELEX test EN NL DE Gold 1.10 1.17 1.30 Supervised 1.13 1.64 1.50 Basic 1.06 1.21 1.25 Joint 1.09 1.08 1.20 Snowball 1.03 1.45 2.02 Morfessor 1.11 1.68 3.27 Table 10: Average number of stems per lemma. sets.5 The results are presented in Table 10. The stems-per-table average tends to reflect the morphological complexity of a language. All systems achieve excellent consistency on English, but the Dutch and German results paint a different picture. The supervised system falls somewhat short of emulating the gold segmentations, which may be due to the confusion between different parts of speech. In terms of consistency, the stems generated by our unsupervised methods are superior to those of Snowball and Morfessor, and even to the gold stems. We attribute this surprising result to the fact that the EM-based alignment of the training data favors consistency in both stems and affixes, although this may not always result in the correct segmentation. 5 Lemmatization Methods In this section, we present three supervised lemmatization methods, two of which incorporate the unsupervised stemming models described in Section 3. The different approaches are presented schematically in Figure 1, using the example of the German past participle gedacht. 5.1 Stem-based Lemmatization Our stem-based lemmatization method is an extension of our basic stemming method. We compose the word-to-stem transduction model from Section 3 with a stem-to-lemma model that converts stems into lemmas. The latter is trained on character-aligned pairs of stems and lemmas, where stems are extracted from the inflection tables via the unsupervised method described in Section 3.3. 5Chipmunk is excluded from the consistency evaluation because its dataset is not composed of complete inflection tables. 1143 Figure 1: Three lemmatization methods. 5.2 Stemma-based Lemmatization Our stemma-based lemmatization method is an extension of our joint stemming method. We compose the word-to-stemma transduction model described in Section 3.4 with a stemma-to-lemma model that converts stems into lemmas. The latter is trained on character-aligned pairs of stemmas and lemmas, where stemmas are extracted via the method described in Section 3.4. Typically, the model simply appends a lemmatic affix to the stemma, as all stem changes are handled by the word-to-stemma model. 5.3 Direct Lemmatization Our final lemmatization method is a word-tolemma transduction model that directly transforms word-forms into lemmas and tags. The model is trained on word-forms paired with their lemmas and inflectional tags, which are easily obtained from the inflection tables. 
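A minimal sketch of the stem-based pipeline of Section 5.1, treating the two DIRECTL+ models as black boxes exposing a transduce() method; the class and method names are ours, not part of the DIRECTL+ toolkit, and picking the longest boundary-delimited piece as the stem is a simplifying assumption.

class StemBasedLemmatizer:
    def __init__(self, word_to_stem, stem_to_lemma):
        self.word_to_stem = word_to_stem      # inserts stem/affix boundary markers
        self.stem_to_lemma = stem_to_lemma    # maps a stem to its lemma

    def stem(self, word):
        # e.g. 'gedacht' -> 'ge+dach+t' -> 'dach'
        marked = self.word_to_stem.transduce(word)
        return max(marked.split('+'), key=len)

    def lemmatize(self, word):
        return self.stem_to_lemma.transduce(self.stem(word))

The stemma-based and direct methods described next differ only in what the first (or only) transduction step predicts.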
A potential advantage of this method lies in removing the possibility of error propagation that is inherent in pipeline approaches. However, it involves a more complex transduction model that must simultaneously apply both stem changes, and transform inflectional affixes into lemmatic ones. 5.4 Re-ranking Intuitively, lemmatization accuracy could be improved by leveraging large, unannotated corpora. After generating n-best lists of possible lemmas, we re-rank them using the method of Joachims (2002) implemented with the Liblinear SVM tool (Fan et al., 2008). We employ four features of the prediction: 1. normalized score from DIRECTL+, 2. rank in the n-best list 3. presence in the corpus, 4. normalized likelihood from a 4-gram character language model derived from the corpus. 6 Lemmatization Experiments Unlike stemming, lemmatization is a completely consistent process: all word-forms within an inflection table correspond to the same lemma. In this section, we describe intrinsic and extrinsic experiments to evaluate the quality of the lemmas generated by our systems, and compare the results against the current state of the art. 6.1 Data As in our stemming experiments, we extract complete English, Dutch, and German inflection tables from CELEX. We use the same data splits as in Section 4.1. We also evaluate our methods on Spanish verb inflection tables extracted from Wiktionary by Durrett and DeNero (2013), using the original data splits. Spanish is a Romance language, with a rich verbal morphology comprising 57 inflections for each lemma. A different type of dataset comes from the CoNLL-2009 Shared Task (Hajiˇc et al., 2009). Unlike the CELEX and Wiktionary datasets, they are extracted from an annotated text, and thus contain few complete inflection tables, with many lemmas represented by a small number of wordforms. We extract all appropriate parts-of-speech from the test section of the corpus for English, German, and Spanish. This results in a test set of 5165 unique forms for English, 6572 for German, and 2668 for Spanish. For re-ranking, we make use of a word list constructed from the first one million lines of the appropriate Wikipedia dump.6 A character language model is constructed using the CMU Statistical Language Modeling Toolkit.7 20% of the development set is reserved for the purpose of training a re-ranking model. For Lemming and Morfette, we provide a lexicon generated from the corpus. Spanish marks unpredictable stress by marking a stressed vowel with an acute accent (e.g. cant´o 6All dumps are from November 2, 2015. 7http://www.speech.cs.cmu.edu 1144 Wiki CELEX CoNLL ES EN NL DE EN DE ES Stem-based 97.1 89.1 82.3 76.3 90.2 71.1 83.2 Stemma-based 94.5 96.4 85.2 85.8 92.5 75.9 91.2 Direct 98.8 96.4 89.5 88.7 92.5 80.1 91.5 Morfette 98.0 96.0 80.2 81.3 92.5 73.5 91.5 Lemming 98.6 96.7 86.6 88.2 92.5 77.9 90.4 Table 11: Lemmatization results without the use of a corpus. vs. canto). In order to facilitate generalization, we perform a lossless pre-processing step that replaces all accented vowels with their unaccented equivalent followed by a special stress symbol (e.g. canto’). For consistency, this modification is applied to the data for each system. 6.2 Intrinsic Evaluation We evaluate lemmatization using word accuracy. In cases where a surface word-form without a morphological tag may correspond to multiple lemmas, we judge the prediction as correct if it matches any of the lemmas. 
For example, both the noun Schrei and the verb schreien are considered to be correct lemmas for the German word schreien.8 The results without the use of a corpus are shown in Table 11. Thanks to its tag awareness, the stemma-based method is more accurate than the stem-based method, except on the verb-only Spanish Wiktionary dataset. However, our best method is the direct word-to-lemma model, which outperforms both Morfette and Lemming on most datasets. We interpret the results as the evidence for the effectiveness of our discriminative string transduction approach. The direct model is superior to the stemma-based model because it avoids any information loss that may occur during an intermediate stemming step. However, it is still able to take advantage of the tag that it generates together with the target lemma. For example, Lemming incorrectly lemmatizes the German noun form Verdienste “earnings” as verdien because +ste is a superlative adjective suffix. Our direct model, however, considers dien to be an unlikely ending for an adjective, and instead produces the correct lemma Verdienst. The results with the use of a corpus are shown 8The capitalization of German nouns is ignored. CELEX CoNLL NL DE DE ES Stem-based 82.3 76.9 71.9 90.6 Stemma-based 87.3 88.4 79.0 93.3 Direct 92.4 90.0 81.3 91.9 Lemming 86.9 88.5 77.9 90.6 Table 12: Lemmatization results boosted with a raw corpus. in Table 12. We omit the results on Spanish Wiktionary and on both English datasets, which are almost identical to those in Table 11. We observe that both the stemma-based and direct methods achieve a substantial error rate reduction on the Dutch and German datasets, while Lemming improvements are minimal.9 The Spanish CoNLL results are different: only the stem-based and stemma-based methods benefit noticeably from reranking. Error analysis indicates that the re-ranker is able to filter non-existent lemmas, such as wint for Winter, and endstadie for Endstadien, instead of Endstadium. In general, the degree of improvement seems to depend on the set of randomly selected instances in the held-out set used for training the re-ranker. If a base model achieves a very high accuracy on the held-out set, the re-ranker tends to avoid correcting the predictions on the test set. 6.3 Extrinsic Evaluation We perform our final evaluation experiment on the German dataset10 from the SIGMORPHON shared task on morphological reinflection (Cot9We were unable to obtain any corpus improvement with Morfette. 10http://sigmorphon.org/sharedtask 1145 Task 1 Task 3 Baseline 89.4 81.5 Chipmunk 82.0 88.3 Stem-based 86.9 89.3 Stemma-based 84.0 89.5 Lemma-based n/a 90.7 Source-Target 94.8 88.2 Table 13: Accuracy on the German dataset from the shared task on morphological reinflection. terell et al., 2016).11 The task of inflection generation (Task 1) is to produce a word-form given a lemma and an abstract inflectional tag. The task of unlabeled reinflection (Task 3) takes as input an unannotated inflected form instead of a lemma. We evaluate four different methods that combine the models introduced in this paper. For Task 1, the stem-based method composes a lemma-tostem and a stem-to-word models; the stemmabased method is similar, but pivots on stemmas instead; and the source-target method is a lemmato-word model. For Task 3, a word-to-lemma model is added in front of both the stem-based and stemma-based methods; the lemma-based method composes a word-to-lemma and a lemma-to-word models; and the source-target method is a wordto-word model. 
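The compositions above can be pictured as simple pipelines of transducers. In the sketch below (a schematic illustration, not the released system; Transducer is a placeholder for a trained DirecTL+ model and the handling of the target tag is deliberately simplified), each method is just a different chain of the same building blocks:

class Transducer:
    def __init__(self, name):
        self.name = name
    def transduce(self, string, tag=None):
        # A real model rewrites the string; this placeholder only records
        # which step was applied.
        return f"<{self.name}:{string}{'+' + tag if tag else ''}>"

def compose(*models):
    def pipeline(source, target_tag):
        out = source
        for model in models[:-1]:
            out = model.transduce(out)
        # The target tag guides only the final, word-generating step.
        return models[-1].transduce(out, target_tag)
    return pipeline

lemma_to_stem = Transducer("lemma-to-stem")
stem_to_word = Transducer("stem-to-word")
word_to_lemma = Transducer("word-to-lemma")
lemma_to_word = Transducer("lemma-to-word")

# Task 1: inflection generation from a lemma and a tag.
task1_stem_based = compose(lemma_to_stem, stem_to_word)
task1_source_target = compose(lemma_to_word)
# Task 3: unlabeled reinflection from an arbitrary inflected form.
task3_stem_based = compose(word_to_lemma, lemma_to_stem, stem_to_word)
task3_lemma_based = compose(word_to_lemma, lemma_to_word)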
In addition, we compare with a method that is similar to our stem-based method, but pivots on Chipmunk-generated stems instead. As a baseline, we run the transduction method provided by the task organizers. The results are shown in Table 13. On Task 1, none of the stemming approaches is competitive with a direct lemma-to-word model. This is not surprising. First, the lemmatic suffixes provide information regarding part-of-speech. Second, the stemmers fail to take into account the fact that the source word-forms are lemmas. For example, the German word ¨uberhitzend “overheated” can either be an adjective, or the present participle of the verb ¨uberhitzen; if the word is a lemma, it is obviously the former. The lemma-based method is the best performing one on Task 3. One advantage that it has over the word-to-word model lies in the ability to reduce the potentially quadratic number of transduction operations between various related word11We use the development sets for this evaluation because the target sides of the test sets have not been publicly released. forms to a linear number of transduction operations between the word-forms and their lemmas, and vice-versa. 7 Conclusion We have presented novel methods that leverage readily available inflection tables to produce highquality stems and lemmas. In the future, we plan to expand our method to predict morphological analyses, as well as to incorporate other information such as parts-of-speech. Acknowledgments This research was supported by the Natural Sciences and Engineering Research Council of Canada, and the Alberta Innovates Technology Futures. References Harald R. Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. The CELEX Lexical Database. Release 2 (CD-ROM). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, Pennsylvania. Grzegorz Chrupała, Georgiana Dinu, and Josef Van Genabith. 2008. Learning morphology with Morfette. In LREC. Ryan Cotterell, Thomas M¨uller, Alexander Fraser, and Hinrich Sch¨utze. 2015. Labeled morphological segmentation with semi-markov models. CoNLL 2015, page 164. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In SIGMORPHON. Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 workshop on Morphological and phonological learning-Volume 6, pages 21–30. Mathias Creutz and Krista Lagus. 2004. Induction of a simple morphology for highly-inflecting languages. In Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology, pages 43–51. Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR05), volume 1(106-113), pages 51–59. 1146 Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In HLT-NAACL, pages 1185–1195. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874. John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational linguistics, 27(2):153–198. Stig-Arne Gr¨onroos, Sami Virpioja, Peter Smit, and Mikko Kurimo. 2014. 
Morfessor FlatCat: An HMM-based method for unsupervised and semisupervised learning of morphology. In COLING, pages 1177–1185. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In CoNLL, pages 1–18. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In NAACL-HLT, pages 372–379. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training network. In NAACLHLT. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133– 142. ACM. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In ACL. Thomas M¨uller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order CRFs for morphological tagging. In EMNLP, pages 322–332. Thomas M¨uller, Ryan Cotterell, and Alexander Fraser. 2015. Joint lemmatization and morphological tagging with LEMMING. In EMNLP. Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In NAACL-HLT, pages 209– 217. Martin F Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2014. Painless semi-supervised morphological segmentation using conditional random fields. EACL, page 84. Kristina Toutanova and Colin Cherry. 2009. A global model for joint lemmatization and part-of-speech prediction. In ACL, pages 486–494. 1147
2016
108
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1148–1157, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Scaling a Natural Language Generation System Jonathan Pfeil Department of EECS Case Western Reserve University Cleveland, OH, USA [email protected] Soumya Ray Department of EECS Case Western Reserve University Cleveland, OH, USA [email protected] Abstract A key goal in natural language generation (NLG) is to enable fast generation even with large vocabularies, grammars and worlds. In this work, we build upon a recently proposed NLG system, Sentence Tree Realization with UCT (STRUCT). We describe four enhancements to this system: (i) pruning the grammar based on the world and the communicative goal, (ii) intelligently caching and pruning the combinatorial space of semantic bindings, (iii) reusing the lookahead search tree at different search depths, and (iv) learning and using a search control heuristic. We evaluate the resulting system on three datasets of increasing size and complexity, the largest of which has a vocabulary of about 10K words, a grammar of about 32K lexicalized trees and a world with about 11K entities and 23K relations between them. Our results show that the system has a median generation time of 8.5s and finds the best sentence on average within 25s. These results are based on a sequential, interpreted implementation and are significantly better than the state of the art for planningbased NLG systems. 1 Introduction and Related Work We consider the restricted natural language generation (NLG) problem (Reiter and Dale, 1997): given a grammar, lexicon, world and a communicative goal, output a valid sentence that satisfies this goal. Though restricted, this problem is still challenging when the NLG system has to deal with the large probabilistic grammars of natural language, large knowledge bases representing realistic worlds with many entities and relations between them, and complex communicative goals. Prior work has approach NLG from two directions. One strategy is over-generation and ranking, in which an intermediate structure generates many candidate sentences which are then ranked according to how well they match the goal. This includes systems built on chart parsers (Shieber, 1988; Kay, 1996; White and Baldridge, 2003), systems that use forest architectures such as HALogen/Nitrogen, (Langkilde-Geary, 2002), systems that use tree conditional random fields (Lu et al., 2009), and newer systems that use recurrent neural networks (Wen et al., 2015b; Wen et al., 2015a). Another strategy formalizes NLG as a goal-directed planning problem to be solved using an automated planner. This plan is then semantically enriched, followed by surface realization to turn it into natural language. This is often viewed as a pipeline generation process (Reiter and Dale, 1997). An alternative to pipeline generation is integrated generation, in which the sentence planning and surface realization tasks happen simultaneously (Reiter and Dale, 1997). CRISP (Koller and Stone, 2007) and PCRISP (Bauer and Koller, 2010) are two such systems. These generators encode semantic components and grammar actions in PDDL (Fox and Long, 2003), the input format for many off-the-shelf planners such as Graphplan (Blum and Furst, 1997). During the planning process a semantically annotated parse is generated alongside the sentence, preventing ungrammatical sentences and structures that cannot be realized. 
PCRISP builds upon the CRISP system by incorporating grammar probabilities as costs in an offthe-shelf metric planner (Bauer and Koller, 2010). Our work builds upon the Sentence Tree Realization with UCT (STRUCT) system (McKinley and Ray, 2014), described further in the next section. STRUCT performs integrated generation by for1148 malizing the generation problem as planning in a Markov decision process (MDP), and using a probabilistic planner to solve it. Results reported in previous work (McKinley and Ray, 2014) show that STRUCT is able to correctly generate sentences for a variety of communicative goals. Further, the system scaled better with grammar size (in terms of vocabulary) than CRISP. Nonetheless, these experiments were performed with toy grammars and worlds with artificial communicative goals written to test specific experimental variables in isolation. In this work, we consider the question: can we enable STRUCT to scale to realistic generation tasks? For example, we would like STRUCT to be able to generate any sentence from the Wall Street Journal (WSJ) corpus (Marcus et al., 1993). We describe four enhancements to the STRUCT system: (i) pruning the grammar based on the world and the communicative goal, (ii) intelligently caching and pruning the combinatorial space of semantic bindings, (iii) reusing the lookahead search tree at different search depths, and (iv) learning and using a search control heuristic. We call this enhanced version Scalable-STRUCT (S-STRUCT). In our experiments, we evaluate S-STRUCT on three datasets of increasing size and complexity derived from the WSJ corpus. Our results show that even with vocabularies, grammars and worlds containing tens of thousands of constituents, S-STRUCT has a median generation time of 8.5s and finds the best sentence on average within 25s, which is significantly better than the state of the art for planningbased NLG systems. 2 Background: LTAG and STRUCT STRUCT uses an MDP (Puterman, 1994) to formalize the NLG process. The states of the MDP are semantically-annotated partial sentences. The actions of the MDP are defined by the rules of the grammar. STRUCT uses a probabilistic lexicalized tree adjoining grammar (PLTAG). Tree Adjoining Grammars (TAGs) (Figure 1) consist of two sets of trees: initial trees and auxiliary (adjoining) trees. An initial tree can be applied to an existing sentence tree by replacing a leaf node whose label matches the initial tree’s root label in an action called “substitution”. Auxiliary trees have a special “foot” node whose label matches the label of its root, and uses this to encode recursive language structures. Given an exNP N cat cat(x) S NP VP V chased NP λx.λy.chased(x,y) N_r A black N_f λx.black(x) S NP N A black N cat VP V chased NP λy.(cat(x) ^ black(x) ^ chased(x,y)) Figure 1: LTAG examples: initial tree (chased), substitution (cat), and adjunction (black) isting sentence tree, an auxiliary tree can be applied in a three-step process called “adjunction”. First, an adjunction site is selected from the sentence tree; that is, any node whose label matches that of the auxiliary tree’s root and foot. Then, the subtree rooted by the adjunction site is removed from the sentence tree and substituted into the foot node of the auxiliary tree. Finally, the modified auxiliary tree is substituted back into the original adjunction location. LTAG is a variation of TAG in which each tree is associated with a lexical item known as an anchor (Joshi and Schabes, 1997). 
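The two operations are easy to state over plain labelled trees. The following sketch (our own minimal rendering; Node and its fields are assumptions, not the system's data structures) implements substitution and the three-step adjunction described above, and replays the "black cat" adjunction from Figure 1:

import copy

class Node:
    """Minimal labelled tree node; foot marks an auxiliary tree's foot."""
    def __init__(self, label, children=None, foot=False):
        self.label, self.children, self.foot = label, children or [], foot

def substitute(parent, leaf_index, initial_tree):
    # Substitution: replace a leaf whose label matches the initial tree's root.
    leaf = parent.children[leaf_index]
    assert not leaf.children and leaf.label == initial_tree.label
    parent.children[leaf_index] = copy.deepcopy(initial_tree)

def adjoin(parent, child_index, aux_tree):
    # Adjunction: (1) pick a site matching the auxiliary root/foot label,
    # (2) move the site's subtree to the foot node, (3) re-attach the
    # auxiliary tree where the site used to be.
    site = parent.children[child_index]
    aux = copy.deepcopy(aux_tree)
    assert aux.label == site.label
    _replace_foot(aux, site)
    parent.children[child_index] = aux

def _replace_foot(node, subtree):
    for i, child in enumerate(node.children):
        if child.foot and child.label == subtree.label:
            node.children[i] = subtree
            return True
        if _replace_foot(child, subtree):
            return True
    return False

# Figure 1's adjunction: "black" adjoined at the N node dominating "cat".
np = Node("NP", [Node("N", [Node("cat")])])
black = Node("N", [Node("A", [Node("black")]), Node("N", foot=True)])
adjoin(np, 0, black)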
Semantics can be added to an LTAG by annotating each tree with compositional lambda semantics that are unified via β-reduction (Jurafsky and Martin, 2000). A PLTAG associates probabilities with every tree in the LTAG and includes probabilities for starting a derivation, probabilities for substituting into a specific node, and probabilities for adjoining at a node, or not adjoining. The STRUCT reward function is a measure of progress towards the communicative goal as measured by the overlap with the semantics of a partial sentence. It gives positive reward to subgoals fulfilled and gives negative reward for unbound entities, unmet semantic constraints, sentence length, and ambiguous entities. Therefore, the best sentence for a given goal is the shortest unambiguous sentence which fulfills the communicative goal and all semantic constraints. The transition function of the STRUCT MDP assigns the total probability of selecting and applying an action in a state to transition to the next, given by the action’s probability in the grammar. The final component of the MDP is the discount factor, which is set to 1. This is because with lexicalized actions, the state does not loop, and the algorithm may need to generate long sentences to match the communicative goal. STRUCT uses a modified version of the probabilistic planner UCT (Kocsis and Szepesv´ari, 2006), which can generate near-optimal plans 1149 with a time complexity independent of the state space size. UCT’s online planning happens in two steps: for each action available, a lookahead search tree is constructed to estimate the action’s utility. Then, the best available action is taken and the procedure is repeated. If there are any unexplored actions, UCT will choose one according to an “open action policy” which samples PLTAGs without replacement. If no unexplored actions remain, an action a is chosen in state s according to the “tree policy” which maximizes Equation 1. P(s, a) = Q(s, a) + c s lnN(s) N(s, a) (1) Here Q(s, a) is the estimated value of a, computed as the sum of expected future rewards after (s, a). N(s, a) and N(s) are the visit counts for s and (s, a) respectively. c is a constant term controlling the exploration/exploitation trade off. After an action is chosen, the policy is rolled out to depth D by repeatedly sampling actions from the PLTAG, thereby creating the lookahead tree. UCT was originally used in an adversarial environment, so it selects actions leading to the best average reward; however, language generation is not adversarial, so STRUCT chooses actions leading to the best overall reward instead. Algorithm 1 S-STRUCT Algorithm Require: Grammar R, World W, Goal G, num trials N, lookahead depth D, timeout T 1: †R ←pruneGrammar(R) 2: state ←empty state 3: uctTree ←new search tree at state 4: while state not terminal and time < T do 5: †uctTree ←getAction(uctTree, N, D) 6: state ←uctTree.state 7: end while 8: return extractBestSentence(uctTree) The modified STRUCT algorithm presented in this paper, which we call Scalable-STRUCT (S-STRUCT), is shown in Algorithm 1. If the changes described in the next section (lines marked with †) are removed, we recover the original STRUCT system. 
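The action-selection step of Algorithm 2 (lines 3-6) can be sketched as follows; the node fields (available_actions, visit counts N and N_a, value estimates Q) are hypothetical bookkeeping of our own, and c = 0.5 follows the setting used in the experiments:

import math, random

def choose_action(node, c=0.5):
    # node is assumed to expose: available_actions, a visit count N,
    # per-action counts N_a, and value estimates Q (hypothetical fields).
    unexplored = [a for a in node.available_actions if a not in node.Q]
    if unexplored:
        # Open action policy: sample an unexplored PLTAG action weighted by
        # its grammar probability; once expanded, it leaves this set.
        weights = [a.probability for a in unexplored]
        return random.choices(unexplored, weights=weights, k=1)[0]
    # Tree policy (Equation 1): exploitation term plus an exploration bonus
    # that shrinks as (s, a) is visited more often.
    return max(node.Q, key=lambda a: node.Q[a]
               + c * math.sqrt(math.log(node.N) / node.N_a[a]))

The heuristic variant introduced later in Section 3.4 changes only the key function, adding a weighted prior term λH(s, a) to the exploitation term.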
3 Scaling the STRUCT system In this section, we describe five enhancements to STRUCT that will allow it to scale to real world Algorithm 2 getAction (Algorithm 1, line 5) Require: Search Tree uctTree, num trials N, lookahead depth D, grammar R 1: for N do 2: node ←uctTree 3: if node.state has unexplored actions then 4: †action ←pick with open action policy 5: else 6: †action ←pick with tree policy 7: end if 8: †node ←applyAction(node, action) 9: depth ←1 10: while depth < D do 11: action ←sample PLTAG from R 12: †node ←applyAction(node, action) 13: reward ←calcReward(node.state) 14: propagate reward up uctTree 15: depth ←depth + 1 16: end while 17: end for 18: uctTree ←best child of uctTree 19: return uctTree NLG tasks. Although the implementation details of these are specific to STRUCT, all but one (reuse of the UCT search tree) could theoretically be applied to any planning-based NLG system. 3.1 Grammar Pruning It is clear that for a given communicative goal, only a small percentage of the lexicalized trees in the grammar will be helpful in generating a sentence. Since these trees correspond to actions, if we prune the grammar suitably, we reduce the number of actions our planner has to consider. Algorithm 3 pruneGrammar (Algorithm 1, line 1) Require: Grammar R, World W, Goal G 1: G′ ←∅ 2: for e ∈G.entities do 3: G′ ←G′ ∪referringExpression(e, W) 4: end for 5: R′ ←∅ 6: for tree ∈R do 7: if tree fulfills semantic constraints or tree.relations ⊆G′.relations then 8: R′ ←R′ ∪{tree} 9: end if 10: end for 11: return R′ There are four cases in which an action is rele1150 vant. First, the action could directly contribute to the goal semantics. Second, the action could satisfy a semantic constraint, such as mandatory determiner adjunction which would turn “cat” into “the cat” in Figure 1. Third, the action allows for additional beneficial actions later in the generation. An auxiliary tree anchored by “that”, which introduces a relative clause, would not add any semantic content itself. However, it would add substitution locations that would let us go from “the cat” to “the cat that chased the rabbit” later in the generation process. Finally, the action could disambiguate entities in the communicative goal. In the most conservative approach, we cannot discard actions that introduce a relation sharing an entity with a goal entity (through any number of other relations), as it may be used in a referring expression (Jurafsky and Martin, 2000). However, we can optimize this by ensuring that we can find at least one, instead of all, referring expressions. This grammar pruning is “lossless” in that, after pruning, the full communicative goal can still be reached, all semantic constraints can be met, and all entities can be disambiguated. However it is possible that the solution found will be longer than necessary. This can happen if we use two separate descriptors to disambiguate two entities where one would have sufficed. For example, we could generate the sentence “the black dog chased the red cat” where saying “the large dog chased the cat” would have sufficed (if “black”, “red”, and “large” were only included for disambiguation purposes). We implement the pruning logic in the pruneGrammar algorithm shown in Algorithm 3. First, an expanded goal G′ is constructed by explicitly solving for a referring expression for each goal entity and adding it to the original goal. 
The algorithm is based on prior work (Bohnet and Dale, 2005) and uses an alternating greedy search, which chooses the relation that eliminates the most distractors, and a depth-first search to describe the entities. Then, we loop through the trees in the grammar and only keep those that can fulfill semantic constraints or can contribute to the goal. This includes trees introducing relative clauses. 3.2 Handling Semantic Bindings As a part of the reward calculation in Algorithm 4, we must generate the valid bindings between the entities in the partial sentence and the entities in the world (line 2). We must have at least one Algorithm 4 calcReward (Algorithm 2, line 13) Require: Partial Sentence S, World W, Goal G 1: score ←0 2: †B ←getV alidBindings(S, W) 3: if |B| > 0 then 4: †m ←getV alidBinding(S, G) 5: S ←apply m to S 6: score += C1 |G.relations ∩S.relations| 7: score −= C2 |G.conds −S.conds| 8: score −= C3 |G.entities ⊖S.entities| 9: score −= C4 |S.sentence| 10: score /= C5 |B| 11: end if 12: return score valid binding, as this indicates that our partial sentence is factual (with respect to the world); however, more than one binding means that the sentence is ambiguous, so a penalty is applied. Unfortunately, computing the valid bindings is a combinatorial problem. If there are N world entities and K partial sentence entities, there are N K  bindings between them that we must check for validity. This quickly becomes infeasible as the world size grows. Algorithm 5 getValidBindings (Alg. 4, line 2) Require: Partial Sentence S, World W 1: validBindings ←∅ 2: queue ←prevBindings if exists else [∅] 3: while |queue| > 0 do 4: b ←queue.pop() 5: S′ ←apply binding b to S 6: if S′, W consistent and S′.entities all bound then 7: validBindings.append(b) 8: else if S′, W consistent then 9: freeS ←unbound S′.entities 10: freeW ←W.entities not in b 11: for es, ew ∈freeS × freeW do 12: queue.push(b ∪{es →ew}) 13: end for 14: end if 15: end while 16: return validBindings Instead of trying every binding, we use the procedure shown in Algorithm 5 to greatly reduce the number of bindings we must check. Starting with an initially empty binding, we repeatedly add a single {sentenceEntity →worldEntity} pair (line 12). If a binding contains all partial sentence 1151 entities and the semantics are consistent with the world, the binding is valid (lines 6-7). If at any point, a binding yields partial sentence semantics that are inconsistent with the world, we no longer need to consider any bindings which it is a subset of (when condition on line 8 is false, no children expanded). The benefit of this bottom-up approach is that when an inconsistency is caused by adding a mapping of partial sentence entity e1 and world entity e2, all of the N−1 K−1  bindings containing {e1 →e2} are ruled out as well. This procedure is especially effective in worlds/goals with low ambiguity (such as real-world text). We further note that many of the binding checks are repeated between action selections. Because our sentence semantics are conjunctive, entity specifications only get more specific with additional relations; therefore, bindings that were invalidated earlier in the search procedure can never again become valid. Thus, we can cache and reuse valid bindings from the previous partial sentence (line 2). 
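Algorithm 5 can be sketched compactly as below. This is our own simplification: fluents are positive (predicate, arguments) tuples, and the consistency test only checks fully instantiated fluents against the world's ground facts, ignoring negated fluents and other constraints:

from itertools import product

def get_valid_bindings(sent_fluents, sent_entities, world_fluents,
                       world_entities, prev_bindings=None):
    # sent_fluents: [(predicate, (var, ...)), ...] over sentence variables;
    # world_fluents: set of (predicate, (entity, ...)) ground facts.
    def consistent(binding):
        # Only fully instantiated fluents are testable at this point.
        for pred, args in sent_fluents:
            if all(a in binding for a in args):
                grounded = (pred, tuple(binding[a] for a in args))
                if grounded not in world_fluents:
                    return False
        return True

    valid = []
    queue = [dict(b) for b in prev_bindings] if prev_bindings else [{}]
    while queue:
        b = queue.pop()
        if not consistent(b):
            continue              # every extension of b is pruned as well
        if len(b) == len(sent_entities):
            valid.append(b)
            continue
        free_s = [e for e in sent_entities if e not in b]
        free_w = [e for e in world_entities if e not in b.values()]
        for es, ew in product(free_s, free_w):
            queue.append({**b, es: ew})
    return valid

As in Algorithm 5, the same partial binding can be reached through different insertion orders, so an implementation would typically also keep a visited set or bind sentence entities in a fixed order. Even so, rejecting a pair such as {e1 →e2} as soon as it appears removes every binding that contains it, which is what keeps the medium world (741 entities, roughly 1.2 × 10^10 candidate bindings for a four-entity sentence) tractable.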
For domains with very large worlds (where most relations have no bearing on the communicative goal), most of the possible bindings will be ruled out with the first few action applications, resulting in large computational savings. 3.3 Reusing the Search Tree The STRUCT algorithm constructs a lookahead tree of depth D via policy rollout to estimate the value of each action. This tree is then discarded and the procedure repeated at the next state. But it may be that at the next state, many of the useful actions will already have been visited by prior iterations of the algorithm. For a lookahead depth D, some actions will have already been explored up to depth D −1. For example if we have generated the partial sentence “the cat chased the rabbit” and SSTRUCT looks ahead to find that a greater reward is possible by introducing the relative clause “the rabbit that ate”, when we transition to “the rabbit that”, we do not need to re-explore “ate” and can directly try actions that result in “that ate grass”, “that ate carrots”, etc. Note that if there are still unexplored actions at an earlier depth, these will still be explored as well (action rollouts such as “that drank water” in this example). Reusing the search tree is especially effective given that the tree policy causes us to favor areas of the search space with high value. Therefore, when we transition to the state with highest value, it is likely that many useful actions have already been explored. Reusing the search tree is reflected in Algorithms 1-2 by passing uctTree back and forth to/from getAction instead of starting a new search tree at each step. In applyAction, when a state/action already in the tree is chosen, SSTRUCT transitions to the next state without having to recompute the state or its reward. 3.4 Learning and Using Search Control During the search procedure, a large number of actions are explored but relatively few of them are helpful. Ideally, we would know which actions would lead to valuable states without actually having to expand and evaluate the resultant states, which is an expensive operation. From prior knowledge, we know that if we have a partial sentence of “the sky is”, we should try actions resulting in “the sky is blue” before those resulting in “the sky is yellow”. This prior knowledge can be estimated through learned heuristics from previous runs of the planner (Yoon et al., 2008). To do this, a set of previously completed plans can be treated as a training set: for each (state, action) pair considered, a feature vector Φ(s, a) is emitted, along with either the distance to the goal state or a binary indicator of whether or not the state is on the path to the goal. A perceptron (or similar model) H(s, a) is trained on the (Φ(s, a), target) pairs. H(s, a) can be incorporated into the planning process to help guide future searches. We apply this idea to our S-STRUCT system by tracking the (state, action) pairs visited in previous runs of the STRUCT system where STRUCT obtained at least 90% of the reward of the known best sentence and emit a feature vector for each, containing: global tree frequency, tree probability (as defined in Section 4.1), and the word correlation of the action’s anchor with the two words on either side of the action location. 
We define the global tree frequency as the number of times the tree appeared in the corpus normalized by the number of trees in the corpus; this is different than the tree probability as it does not take any context into account (such as the parent tree and substitution location). Upon search completion, the feature vectors are annotated with a binary indicator label of whether or not the (state, action) pair was on the path to the best sentence. This training set is then used to train a perceptron H(s, a). 1152 Table 1: Summary statistics for test data sets Test Set Goals / Sentences Vocab Size Lex Trees / Actions World Entities World Relations Avg. Goal Entities Avg. Goal Relations Max Depth Small 50 130 395 77 135 1.54 2.70 0 Medium 500 1165 3734 741 1418 1.48 2.83 1 Large 5000 9872 31966 10998 23097 2.20 4.62 6 We use H(s, a) to inform both the open action policy (Algorithm 2, line 4) and the tree policy (Algorithm 2, line 6). In the open action policy , we choose open actions according to their heuristic values, instead of just their tree probabilities. In the tree policy, we incorporate H(s, a) into the reward estimation by using Equation 2 in place of Equation 1 in Algorithm 2 (Chaslot et al., 2008a): P(s, a) = Q(s, a)+λH(s, a)+c s lnN(s) N(s, a). (2) Here, H(s, a) is a value prediction from prior knowledge and λ is a parameter controlling the trade-off between prior knowledge and estimated value on this goal. 4 Empirical Evaluation In this section, we evaluate three hypotheses: (1) S-STRUCT can handle real-world datasets, as they scale in terms of (a) grammar size, (b) world size, (c) entities/relations in the goal, (d) lookahead required to generate sentences, (2) S-STRUCT scales better than STRUCT to such datasets and (3) Each of the enhancements above provides a positive contribution to STRUCT’s scalability in isolation. 4.1 Datasets We collected data in the form of grammars, worlds and goals for our experiments, starting from the WSJ corpus of the Penn TreeBank (Marcus et al., 1993). We parsed this with an LTAG parser to generate the best parse and derivation tree (Sarkar, 2000; XTAG Research Group, 2001). The parser generated valid parses for 18,159 of the WSJ sentences. To pick the best parse for a given sentence, we choose the parse which minimizes the PARSEVAL bracket-crossing metric against the goldstandard (Abney et al., 1991). This ensures that the major structures of the parse tree are retained. We then pick the 31 most frequently occurring XTAG trees (giving us 74% coverage of the parsed sentences) and annotate them with compositional semantics. The final result of this process was a corpus of semantically annotated WSJ sentences along with their parse and derivation trees 1. To show the scalability of the improved STRUCT system, we extracted 3 datasets of increasing size and complexity from the semantically annotated WSJ corpus. We nominally refer to these datasets as Small, Medium, and Large. Summary statistics of the data sets are shown in Table 1. For each test set, we take the grammar to be all possible lexicalizations of the unlexicalized trees given the anchors of the test set. We set the world as the union of all communicative goals in the test set. The PLTAG probabilities are derived from the entire parseable portion of the WSJ. Due to the data sparsity issues (Bauer and Koller, 2010), we use unlexicalized probabilities. The reward function constants C were set to [500, 100, 10, 10, 1]. In the tree policy, c was set to 0.5. 
These are as in the original STRUCT system. λ was chosen as 100 after evaluating {0, 10, 100, 1000, 10000} on a tuning set. In addition to test sets, we extract an independent training set using 100 goals to learn the heuristic H(s, a). We train a separate perceptron for each test set and incorporate this into the SSTRUCT algorithm as described in Section 3.4. 4.2 Results For these experiments, S-STRUCT was implemented in Python 3.4. The experiments were run on a single core of a Intel(R) Xeon(R) CPU E52450 v2 processor clocked at 2.50GHz with access to 8GB of RAM. The times reported are from the start of the generation process instead of the start of the program execution to reduce variation caused by interpreter startup, input parsing, etc. In 1Not all of the covered trees were able to recursively derive their semantics, despite every constituent tree being semantically annotated. This is because β-reduction of the λsemantics is not associative in many cases where the syntactic composition is associative, causing errors during semantic unification. Due to this and other issues, the number of usable parse trees/sentences was about 7500. 1153 0 10 20 30 40 50 0.0 0.2 0.4 0.6 0.8 1.0 1.2 +Prune Grammar Baseline (a) Small Baseline 0.0 0.2 0.4 0.6 0.8 1.0 1.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 +Heuristic (S-STRUCT) +Cache Bindings +Reuse Search Tree +Search Bindings (b) Small 0.0 0.5 1.0 1.5 2.0 0.0 0.2 0.4 0.6 0.8 1.0 1.2 (c) Medium 0 10 20 30 40 50 60 0.0 0.2 0.4 0.6 0.8 1.0 1.2 (d) Large Figure 2: Avg. Best Normalized Reward (y-axis) vs. Time in Seconds (x-axis) for (a) Small Baseline, (b) Small, (c) Medium, (d) Large. Time when first grammatical sentence available marked as ×. Experiments are cumulative (a trial contains all improvements below it in the legend). S NP D the NP N provision VP V eliminated NP N losses S NP D the NP N A one-time N provision VP V eliminated NP N A future N losses S NP D the NP N A one-time N provision VP V eliminated NP NP N A future N losses PP P at NP D the NP N unit Figure 3: Best sentence available during S-STRUCT generation at 5.5 (s), 18.0 (s), and 28.2 (s) all experiments, we normalize the reward of a sentence by the reward of the actual parse tree, which we take to be the gold standard. Note that this means that in some cases, S-STRUCT can produce solutions with better than this value, e.g. if there are multiple ways to achieve the semantic goal. To investigate the first two hypotheses that S-STRUCT can handle the scale of real-world datasets and scales better than STRUCT, we plot the average best reward of all goals in the test set over time in Figure 2. The results show the cumulative effect of the enhancements; working up through the legend, each line represents “switching on” another option and includes the effects of all improvements listed below it. The addition of the heuristic represents the entire S-STRUCT system. On each line, × marks the time at which the first grammatically correct sentence was available. The Baseline shown in Figure 2a is the original STRUCT system proposed in (McKinley and Ray, 2014). Due to the large number of actions that must be considered, the Baseline experiment’s average first sentence is not available until 26.20 seconds, even on the Small dataset. In previous work, the experiments for both STRUCT and CRISP were on toy examples, with grammars having 6 unlexicalized trees and typically < 100 lexicalized trees (McKinley and Ray, 2014; Koller and Stone, 2007). 
In these experiments, STRUCT was shown to perform better than or as well as CRISP. Even in our smallest domain, however, the baseline STRUCT system is impractically slow. Further, prior work on PCRISP used a grammar that was extracted from the WSJ Penn TreeBank, however it was restricted to the 416 sentences in Section 0 with <16 words. With PCRISP’s extracted grammar, the most successful realization experiment yielded a sentence in only 62% of the trials, the remainder having timed out after five minutes (Bauer and Koller, 2010). Thus it is clear that these systems do not scale to real NLG tasks. Adding the grammar pruning to the Baseline allows S-STRUCT to find the first grammatically correct sentence in 1.3 seconds, even if the reward is still sub-optimal. For data sets larger than Small, the Baseline and Prune Grammar experiments could not be completed, as they still enumerated all semantic bindings. For even the medium world, a sentence with 4 entities would have to consider 1.2 × 1010 bindings. Therefore, the cumulative experiments start with Prune Grammar and Search Bindings turned on. Figures 2b, 2c and 2d show the results for each enhancement above on the corresponding dataset. We observe that the improved binding search further improves performance on the Small task. The Small test set does not require any lookahead, so it is expected that there would be no benefit to 1154 (a) 0 10 20 30 40 50 60 Time (seconds) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Reward Heuristic Cache Bindings Reuse Search Tree Search Bindings (b) S_r NP_0 VP V ε_v AP_1 A red λx.red(x) (c) 0 20 40 60 80 100 120 140 Time (seconds) 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 (d) Figure 4: (a) Large Non-Cumulative Experiment (b) αnx0Ax1 XTAG tree (c) Time to 90% Reward (d) Lookahead Required. reusing the search tree, and little to no benefit from caching bindings or using a heuristic. In the Small domain, S-STRUCT is able to generate sentences very quickly; the first sentence is available by 44ms and the best sentence is available by 100ms. In the medium and large domains, the “Reuse Search Tree”, “Cache Bindings”, and “Heuristic” changes do improve upon the use of only “Search Bindings”. The Medium domain is still extremely fast, with the first sentence available in 344ms and the best sentence available around 1s. The large domain slows down due to the larger lookahead required, the larger grammar, and the huge number of bindings that have to be considered. Even with this, S-STRUCT can generate a first sentence in 7.5s and the best sentence in 25s. In Figure 4c, we show a histogram of the generation time to 90% of the best reward. The median time is 8.55s (• symbol). Additionally, histograms of the lookahead required for guaranteed optimal generation are shown for the entire parsable WSJ and our Large world in Figure 4d. The complexity of the entire WSJ does not exceed our Large world, thus we argue that our results are representative of SSTRUCT’s performance on real-world tasks. To investigate the third hypothesis that each improvement contributes positively to the scalability, the noncumulative impact of each improvement is shown in Figure 4a. All experiments still must have Prune Grammar and Search Bindings turned on in order to terminate. Therefore, we take this as a baseline to show that the other changes provide additional benefits. Looking at Figure 4a, we see that each of the changes improves the reward curve and the time to generate the first sentence. 
4.3 Discussion, Limitations and Future Work As an example of sentences available at a given time in the process, we annotate the Large Cumulative Heuristic Experiment with ♦symbols for a specific trial of the Large dataset. Figure 3 shows the best sentence that was available at three different times. The first grammatically correct sentence was available 5.5 seconds into the generation process, reading “The provision eliminated losses”. This sentence captured the major idea of the communicative goal, but missed some critical details. As the search procedure continued, S-STRUCT explored adjunction actions. By 18 seconds, additional semantic content was added to expand upon 1155 the details of the provision and losses. S-STRUCT settled on the best sentence it could find at 28.2 seconds, able to match the entire communicative goal with the sentence “The one-time provision eliminated future losses at the unit”. In domains with large lookaheads required, reusing the Search Tree has a large effect on both the best reward at a given time and on the time to generate the first sentence. This is because SSTRUCT has already explored some actions from depth 1 to D −1. Additionally, in domains with a large world, the Cache Binding improvement is significant. The learned heuristic, which achieves the best reward and the shortest time to a complete sentence, tries to make S-STRUCT choose better actions at each step instead of allowing STRUCT to explore actions faster; this means that there is less overlap between the improvement of the heuristic and other strategies, allowing the total improvement to be higher. One strength of the heuristic is in helping SSTRUCT to avoid “bad” actions. For example, the XTAG tree αnx0Ax1 shown in Figure 4b is an initial tree lexicalized by an adjective. This tree would be used to say something like “The dog is red.” S-STRUCT may choose this as an initial action to fulfill a subgoal; however, if the goal was to say that a red dog chased a cat, S-STRUCT will be shoehorned into a substantially worse goal down the line, when it can no longer use an initial tree that adds the “chase” semantics. Although the rollout process helps, some sentences can share the same reward up to the lookahead and only diverge later. The heuristic can help by biasing the search against such troublesome scenarios. All of the results discussed above are without parallelization and other engineering optimizations (such as writing S-STRUCT in C), as it would make for an unfair comparison with the original system. The core UCT procedure used by STRUCT and S-STRUCT could easily be parallelized, as the sampling shown in Algorithm 2 can be done independently. This has been done in other domains in which UCT is used (Computer Go), to achieve a speedup factor of 14.9 using 16 processor threads (Chaslot et al., 2008b). Therefore, we believe these optimizations would result in a constant factor speedup. Currently, the STRUCT and S-STRUCT systems only focuses on the domain of single sentence generation, rather than discourse-level planning. Additionally, neither system handles nonsemantic feature unification, such as constraints on number, tense, or gender. While these represent practical concerns for a production system, we argue that their presence will not affect the system’s scalability, as there is already feature unification happening in the λ-semantics. In fact, we believe that additional features could improve the scalability, as many available actions will be ruled out at each state. 
5 Conclusion In this paper we have presented S-STRUCT, which enhances the STRUCT system to enable better scaling to real generation tasks. We show via experiments that this system can scale to large worlds and generate complete sentences in realworld datasets with a median time of 8.5s. To our knowledge, these results and the scale of these NLG experiments (in terms of grammar size, world size, and lookahead complexity) represents the state-of-the-art for planning-based NLG systems. We conjecture that the parallelization of SSTRUCT could achieve the response times necessary for real-time applications such as dialog. S-STRUCT is available through Github upon request. References S. Abney, S. Flickenger, C. Gdaniec, C. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. Procedure for quantitatively comparing the syntactic coverage of english grammars. In E. Black, editor, Proceedings of the Workshop on Speech and Natural Language, HLT ’91, pages 306–311, Stroudsburg, PA, USA. Association for Computational Linguistics. D. Bauer and A. Koller. 2010. Sentence generation as planning with probabilistic LTAG. Proceedings of the 10th International Workshop on Tree Adjoining Grammar and Related Formalisms, New Haven, CT. A.L. Blum and M.L. Furst. 1997. Fast planning through planning graph analysis. Artificial intelligence, 90(1):281–300. Bernd Bohnet and Robert Dale. 2005. Viewing referring expression generation as search. In International Joint Conference on Artificial Intelligence, pages 1004–1009. Guillaume M. JB Chaslot, Mark H.M. Winands, H. Jaap van Den Herik, Jos W.H.M. Uiterwijk, and Bruno Bouzy. 2008a. Progressive strategies for 1156 monte-carlo tree search. New Mathematics and Natural Computation, 4(03):343–357. Guillaume M. JB Chaslot, Mark H.M. Winands, and H Jaap van Den Herik. 2008b. Parallel monte-carlo tree search. In Computers and Games, pages 60–71. Springer. M. Fox and D. Long. 2003. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research, 20:61– 124. Aravind K Joshi and Yves Schabes. 1997. Treeadjoining grammars. In Handbook of formal languages, pages 69–123. Springer. Daniel Jurafsky and James H. Martin. 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1st edition. Martin Kay. 1996. Chart generation. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, ACL ’96, pages 200– 204, Stroudsburg, PA, USA. Association for Computational Linguistics. Levente Kocsis and Csaba Szepesv´ari. 2006. Bandit based monte-carlo planning. In Proceedings of the 17th European Conference on Machine Learning, ECML’06, pages 282–293, Berlin, Heidelberg. Springer-Verlag. Alexander Koller and Matthew Stone. 2007. Sentence generation as a planning problem. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 336–343, Prague, Czech Republic, June. Association for Computational Linguistics. I. Langkilde-Geary. 2002. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proceedings of the 12th International Natural Language Generation Workshop, pages 17–24. Citeseer. W. Lu, H.T. Ng, and W.S. Lee. 2009. Natural language generation with tree conditional random fields. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, pages 400–409. Association for Computational Linguistics. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The Penn Treebank. Computational linguistics, 19(2):313–330. Nathan McKinley and Soumya Ray. 2014. A decisiontheoretic approach to natural language generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 552–561, Baltimore, Maryland, June. Association for Computational Linguistics. M.L. Puterman. 1994. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, Inc. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Anoop Sarkar. 2000. Practical experiments in parsing using tree adjoining grammars. In Proceedings of TAG, volume 5, pages 25–27. Stuart M. Shieber. 1988. A uniform architecture for parsing and generation. In Proceedings of the 12th conference on Computational linguistics - Volume 2, COLING ’88, pages 614–619, Stroudsburg, PA, USA. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 275–284, Prague, Czech Republic, September. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015b. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721, Lisbon, Portugal, September. Association for Computational Linguistics. M. White and J. Baldridge. 2003. Adapting chart realization to CCG. In Proceedings of the 9th European Workshop on Natural Language Generation, pages 119–126. XTAG Research Group. 2001. A lexicalized tree adjoining grammar for english. Technical Report IRCS-01-03, IRCS, University of Pennsylvania. Sungwook Yoon, Alan Fern, and Robert Givan. 2008. Learning control knowledge for forward search planning. The Journal of Machine Learning Research, 9:683–718. 1157
2016
109
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 108–117, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Incremental Acquisition of Verb Hypothesis Space towards Physical World Interaction Lanbo She and Joyce Y. Chai Department of Computer Science and Engineering Michigan State University East Lansing, Michigan 48824, USA {shelanbo, jchai}@cse.msu.edu Abstract As a new generation of cognitive robots start to enter our lives, it is important to enable robots to follow human commands and to learn new actions from human language instructions. To address this issue, this paper presents an approach that explicitly represents verb semantics through hypothesis spaces of fluents and automatically acquires these hypothesis spaces by interacting with humans. The learned hypothesis spaces can be used to automatically plan for lower-level primitive actions towards physical world interaction. Our empirical results have shown that the representation of a hypothesis space of fluents, combined with the learned hypothesis selection algorithm, outperforms a previous baseline. In addition, our approach applies incremental learning, which can contribute to life-long learning from humans in the future. 1 Introduction As a new generation of cognitive robots start to enter our lives, it is important to enable robots to follow human commands (Tellex et al., 2014; Thomason et al., 2015) and to learn new actions from human language instructions (Cantrell et al., 2012; Mohan et al., 2013). To achieve such a capability, one of the fundamental challenges is to link higher-level concepts expressed by human language to lower-level primitive actions the robot is familiar with. While grounding language to perception (Gorniak and Roy, 2007; Chen and Mooney, 2011; Kim and Mooney, 2012; Artzi and Zettlemoyer, 2013; Tellex et al., 2014; Liu et al., 2014; Liu and Chai, 2015) has received much attention in recent years, less work has addressed grounding language to robotic action. Actions are often expressed by verbs or verb phrases. Most semantic representations for verbs are based on argument frames (e.g., thematic roles which capture participants of an action). For example, suppose a human directs a robot to “fill the cup with milk”. The robot will need to first create a semantic representation for the verb “fill” where “the cup” and “milk” are grounded to the respective objects in the environment (Yang et al., 2016). Suppose the robot is successful in this first step, it still may not be able to execute the action “fill” as it does not know how this higher-level action corresponds to its lower-level primitive actions. In robotic systems, operations usually consist of multiple segments of lower-level primitive actions (e.g., move to, open gripper, and close gripper) which are executed both sequentially and concurrently. Task scheduling provides the order or schedule for executions of different segments of actions and action planning provides the plan for executing each individual segment. Primitive actions are often predefined in terms of how they change the state of the physical world. Given a goal, task scheduling and action planning will derive a sequence of primitive actions that can change the initial environment to the goal state. The goal state of the physical world becomes a driving force for robot actions. 
Thus, beyond semantic frames, modeling verb semantics through their effects on the state of the world may provide a link to connect higher-level language and lowerlevel primitive actions. Motivated by this perspective, we have developed an approach where each verb is explicitly represented by a hypothesis space of fluents (i.e., desired goal states) of the physical world, which is incrementally acquired and updated through interacting with humans. More specifically, given a human command, if there is no knowledge about the 108 corresponding verb (i.e., no existing hypothesis space for that verb), the robot will initiate a learning process by asking human partners to demonstrate the sequence of actions that is necessary to accomplish this command. Based on this demonstration, a hypothesis space of fluents for that verb frame will be automatically acquired. If there is an existing hypothesis space for the verb, the robot will select the best hypothesis that is most relevant to the current situation and plan for the sequence of lower-level actions. Based on the outcome of the actions (e.g., whether it has successfully executed the command), the corresponding hypothesis space will be updated. Through this fashion, a hypothesis space for each encountered verb frame is incrementally acquired and updated through continuous interactions with human partners. In this paper, to focus our effort on representations and learning algorithms, we adopted an existing benchmark dataset (Misra et al., 2015) to simulate the incremental learning process and interaction with humans. Compared to previous works (She et al., 2014b; Misra et al., 2015), our approach has three unique characteristics. First, rather than a single goal state associated with a verb, our approach captures a space of hypotheses which can potentially account for a wider range of novel situations when the verb is applied. Second, given a new situation, our approach can automatically identify the best hypothesis that fits the current situation and plan for lower-level actions accordingly. Third, through incremental learning and acquisition, our approach has a potential to contribute to life-long learning from humans. This paper provides details on the hypothesis space representation, the induction and inference algorithms, as well as experiments and evaluation results. 2 Related Work Our work here is motivated by previous linguistic studies on verbs, action modeling in AI, and recent advances in grounding language to actions. Previous linguistic studies (Hovav and Levin, 2008; Hovav and Levin, 2010) propose action verbs can be divided into two types: manner verbs that “specify as part of their meaning a manner of carrying out an action” (e.g., nibble, rub, laugh, run, swim), and result verbs that “specify the coming about of a result state” (e.g., clean, cover, empty, fill, chop, cut, open, enter). Recent work has shown that explicitly modeling resulting change of state for action verbs can improve grounded language understanding (Gao et al., 2016). Motivated by these studies, this paper focuses on result verbs and uses hypothesis spaces to explicitly represent the result states associated with these verbs. In AI literature on action modeling, action schemas are defined with preconditions and effects. Thus, representing verb semantics for action verbs using resulting states can be connected to the agent’s underlying planning modules. 
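To make this link concrete, consider a toy STRIPS-style schema (our own illustrative example, not taken from the paper): preconditions say when an action applies, add and delete lists say how it changes the state fluents, and a verb's desired resulting state can then serve directly as the planning goal.

def ground(fluent, binding):
    pred, *args = fluent
    return (pred,) + tuple(binding.get(a, a) for a in args)

def applicable(state, schema, binding):
    return all(ground(f, binding) in state for f in schema["pre"])

def apply_action(state, schema, binding):
    return (state - {ground(f, binding) for f in schema["del"]}) \
           | {ground(f, binding) for f in schema["add"]}

# Hypothetical schema and world; fluent and entity names are illustrative.
GRASP = {"pre": {("Graspable", "x")},
         "add": {("Grasping", "x")},
         "del": set()}

state = {("Graspable", "Cup1")}
goal = {("Grasping", "Cup1")}
binding = {"x": "Cup1"}
if applicable(state, GRASP, binding):
    state = apply_action(state, GRASP, binding)
assert goal <= state   # the resulting state satisfies the goal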
Different from earlier works in the planning community that learn action models from example plans (Wang, 1995; Yang et al., 2007) and from interactions (Gil, 1994), our goal here is to explore the representation of verb semantics and its acquisition through language and action. There has been some work in the robotics community to translate natural language to robotic operations (Kress-Gazit et al., 2007; Jia et al., 2014; Sung et al., 2014; Spangenberg and Henrich, 2015), but not for the purpose of learning new actions. To support action learning, previously we have developed a system where the robot can acquire the meaning of a new verb (e.g., stack) by following human’s step-by-step language instructions (She et al., 2014a; She et al., 2014b). By performing the actions at each step, the robot is able to acquire the desired goal state associated with the new verb. Our empirical results have shown that representing acquired verbs by resulting states allow the robot to plan for primitive actions in novel situations. Moreover, recent work (Misra et al., 2014; Misra et al., 2015) has presented an algorithm for grounding higher-level commands such as “microwave the cup” to lowerlevel robot operations, where each verb lexicon is represented as the desired resulting states. Their empirical evaluations once again have shown the advantage of representing verbs as desired states in robotic systems. Different from these previous works, we represent verb semantics through a hypothesis space of fluents (rather than a single hypothesis). In addition, we present an incremental learning approach for inducing the hypothesis space and selecting the best hypothesis. 3 An Incremental Learning Framework An overview of our incremental learning framework is shown in Figure 1. Given a language 109 Figure 1: An incremental process of verb acquisition (i.e. learning) and application (i.e. inference). command Li (e.g. “fill the cup with water.”) and an environment Ei (e.g. a simulated environment shown in Figure 1), the goal is to identify a sequence of lower-level robotic actions to perform the command. Similar to previous works (Pasula et al., 2007; Mouro et al., 2012), the environment Ei is represented by a conjunction of grounded state fluents, where each fluent describes either the property of an object or relations (e.g. spatial) between objects. The language command Li is first translated to an intermediate representation of grounded verb frame vi through semantic parsing and referential grounding (e.g. for “fill the cup”, the argument the cup is grounded to Cup1 in the scene). The system knowledge of each verb frame (e.g., fill(x)) is represented by a Hypothesis Space H, where each hypothesis (i.e. a node) is a description of possible fluents - or, in other words, resulting states - that are attributed to executing the verb command. Given a verb frame vi and an environment Ei, a Hypothesis Selector will choose an optimal hypothesis from space H to describe the expected resulting state of executing vi in Ei. Given this goal state and the current environment, a symbolic planner such as the STRIPS planner (Fikes and Nilsson, 1971) is used to generate an action sequence for the agent to execute. If the action sequence correctly performs the command (e.g. as evaluated by a human partner), the hypothesis selector will be updated with the success of its prediction. 
On the other hand, if the action has never been encountered (i.e., the system has no knowledge about this verb and thus the corresponding space is empty) or the predicted action sequence is incorrect, the human partner will provide an action sequence ⃗Ai that can correctly perform command vi in the current environment. Using ⃗Ai as the ground truth information, Figure 2: An example hypothesis space for the verb frame fill(x). The bottom node captures the state changes after executing the fill command in the environment. Anchored by the bottom node, the hypothesis space is generated in a bottom-up fashion. Each node represents a potential goal state. The highlighted nodes are pruned during induction, as they are not consistent with the bottom node. the system will not only update the hypothesis selector, but will also update the existing space of vi. The updated hypothesis space is treated as system knowledge of vi, which will be used in future interaction. Through this procedure, a hypothesis space for each verb frame vi is continually and incrementally updated through human-robot interaction. 4 State Hypothesis Space To bridge human language and robotic actions, previous works have studied representing the semantics of a verb with a single resulting state (She et al., 2014b; Misra et al., 2015). One problem of this representation is that when the verb is applied in a new situation, if any part of the resulting state cannot be satisfied, the symbolic planner will not be able to generate a plan for lower-level actions to execute this verb command. The planner is also not able to determine whether the failed part of state representation is even necessary. In fact, this effect is similar to the over-fitting problem. For example, given a sequence of actions of performing fill(x), the induced hypothesis could be “Has(x, Water) ∧Grasping(x) ∧ In(x, o1)∧¬(On(x, o2))”, where x is a graspable object (e.g. a cup or bowl), o1 is any type of sink, and o2 is any table. However, during inference, when applied to a new situation that does not have any type of sink or table, this hypothesis will not 110 Figure 3: A training instance {Ei, vi, ⃗Ai} for hypothesis space induction. E′ i is the resulting environment of executing ⃗Ai in Ei. The change of state in E′ i compared to Ei is highlighted in bold. Different heuristics generate different Base Hypotheses as shown at the bottom. be applicable. Nevertheless, the first two terms Has(x, Water) ∧Grasping(x) may already be sufficient to generate a plan for completing the verb command. To handle this over-fitting problem, we propose a hierarchical hypothesis space to represent verb semantics, as shown in Figure 2. The space is organized based on a specific-to-general hierarchical structure. Formally, a hypothesis space H for a verb frame is defined as: ⟨N, E⟩, where each ni ∈N is a hypothesis node and each eij ∈E is a directed edge pointing from parent ni to child nj, meaning node nj is more general than ni and has one less constraint. In Figure 2, the bottom hypothesis (n1) is Has(x, Water) ∧Grasping(x) ∧In(x, o1) ∧ ¬(On(x, o2)). A hypothesis ni represents a conjunction of parameterized state fluents lk: ni := ∧lk, and lk := [¬] predk(xk1[, xk2]) A fluent lk is composed of a predicate (e.g. object status: Has, or spatial relation: On) and a set of argument variables. It can be positive or negative. Take the bottom node in Figure 2 as an example, it contains four fluents including one negative term (i.e. ¬(On(x, o2))) and three positive terms. 
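To make this representation concrete, the following sketch (our own illustration, not the authors' implementation) encodes the bottom node of the fill(x) space in Figure 2 as a conjunction of parameterized fluents and stores the space as a set of nodes plus specific-to-general edges.

```python
# Sketch of the hypothesis-space data structures defined above: a hypothesis is a
# conjunction of parameterized fluents; the space is a node set plus directed edges
# from more specific to more general hypotheses. Variable names (x, o1, o2) follow
# the running fill(x) example; everything else is an assumption for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ParamFluent:
    predicate: str       # e.g. "Has", "Grasping", "In", "On"
    args: tuple          # variables such as ("x", "Water") or ("x", "o1")
    negated: bool = False

# A hypothesis is simply the conjunction (set) of its fluents.
Hypothesis = frozenset   # frozenset of ParamFluent

# The bottom node n1 of the fill(x) space in Figure 2.
n1: Hypothesis = frozenset({
    ParamFluent("Has", ("x", "Water")),
    ParamFluent("Grasping", ("x",)),
    ParamFluent("In", ("x", "o1")),
    ParamFluent("On", ("x", "o2"), negated=True),
})

class HypothesisSpace:
    def __init__(self):
        self.nodes = set()    # set of Hypothesis
        self.edges = set()    # (parent, child): child drops one fluent of parent

    def add_edge(self, parent: Hypothesis, child: Hypothesis):
        self.nodes.update((parent, child))
        self.edges.add((parent, child))
```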
During inference, the parameters will be grounded to the environment to check whether this hypothesis is applicable. 5 Hypothesis Space Induction Given an initial environment Ei, a language command which contains the verb frame vi, and a corresponding action sequence ⃗Ai, {Ei, vi, ⃗Ai} forms a training instance for hypothesis space induction. First, based on different heuristics, a base hypothesis is generated by comparing the state difference between the final and the initial environment. Second, a hypothesis space H is induced on top of this Base Hypothesis in a bottom-up fashion. And during induction some nodes are pruned. Third, if the system has existing knowledge for the same verb frame (i.e. an existing hypothesis space Ht for the same verb frame), this newly induced space will be merged with previous knowledge. Next we explain each step in detail. 5.1 Base Hypothesis Induction One key concept in the space induction is the Base Hypothesis (e.g. the bottom node in Figure 2), which provides a foundation for building a space. As shown in Figure 3, given a verb frame vi and a working environment Ei, the action sequence ⃗Ai given by a human will change the initial environment Ei to a final environment E′ i. The state changes are highlighted in Figure 3. Suppose a state change can be described by n fluents. Then the first question is which of these n fluents should be included in the base hypothesis. To gain some understanding on what would be a good representation, we applied different heuristics of choosing fluents to form a base hypothesis as shown in Figure 3: • H1argonly: only includes the changed states associated with the argument objects specified in the frame (e.g., in Figure 3, Kettle1 is the only argument). • H2manip: includes the changed states of all the objects that have been manipulated in the action sequence taught by the human. • H3argrelated: includes the changed states of all the objects related to the argument objects in the final environment. An object o is considered as “related to” an argument object if there is a state fluent that includes both o and an argument object in one predicate. (e.g. Stove is related to the argument object Kettle1 through On(Kettle1, Stove)). 111 Input: A Base Hypothesis h Initialization: Set initial space H : ⟨N, E⟩with N:[h] and E:[ ], Set a set of temporary hypotheses T:[h] while T is not empty do Pop an element t from T Generate children [t(0),...,t(k)] from t by removing each single fluent foreach i = 0 ... k do if t(i) is consistent with t then Append t(i) to T; Add t(i) to N if not already in; Add link t →t(i) to E if not already in; else Prune t(i) and any node that can be generalized from t(i) end end end Output: Hypothesis space H Algorithm 1: A single hypothesis space induction algorithm. H is a space initialized with a base hypothesis and an empty set of links. T is a temporary container of candidate hypotheses. • H4all: includes all the fluents whose values are changed from Ei to E′ i (e.g. all the four highlighted state fluents in E′ i). 5.2 Single Space Induction First we define the consistency between two hypotheses: Definition. Hypotheses h1 and h2 are consistent, if and only if the action sequence ⃗A1 generated from a symbolic planner based on goal state h1 is exactly the same as the action sequence ⃗A2 generated based on goal state h2. Given a base hypothesis, the space induction process is a while-loop generalizing hypotheses in a bottom-up fashion, which stops when no hypotheses can be further generalized. 
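A Python rendering of that loop is sketched below, under the assumption of a planner(initial_state, goal) interface that returns a sequence of primitive actions; the paragraph that follows walks through the same steps as Algorithm 1.

```python
# Sketch (ours) of the induction loop in Algorithm 1. Hypotheses are frozensets of
# fluents. Two hypotheses are consistent when the assumed planner produces the same
# action sequence for both goal states.
from collections import deque

def consistent(h1, h2, initial_state, planner):
    return planner(initial_state, h1) == planner(initial_state, h2)

def induce_space(base, initial_state, planner):
    """Bottom-up generalization with pruning; returns (nodes, edges)."""
    nodes, edges = {base}, set()
    queue, pruned = deque([base]), []
    while queue:
        parent = queue.popleft()
        for fluent in parent:
            child = parent - {fluent}        # drop exactly one fluent
            if not child or any(child <= p for p in pruned):
                continue                     # skip empty hypotheses and anything
                                             # generalizable from a pruned node
            if consistent(child, parent, initial_state, planner):
                if child not in nodes:
                    queue.append(child)
                nodes.add(child)
                edges.add((parent, child))
            else:
                pruned.append(child)         # prune child and its generalizations
    return nodes, edges
```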
As shown in Algorithm 1, a hypothesis node t can firstly be generalized to a set of immediate children [t(0),...,t(k)] by removing a single fluent from t. For example, the base hypothesis n1 in Figure 2 is composed of 4 fluents, such that 4 immediate children nodes can potentially be generated. If a child node t(i) is consistent with its parent t (i.e. determined based on the consistency defined previously), node t(i) and a link t →t(i) are added to the space H. The node t(i) is also added to a temporary hypothesis container waiting to be further generalized. On the other hand, some children hypotheses can be inconsistent with their parents. For example, the gray node (n2) in Figure 2 is a child node that is inconsistent with its parent (n1). As n2 does not explicitly specify Has(x, Water) as part of its goal state, the symbolic planner generates less steps to achieve goal state n2 than goal state n1. This implies that the semantics of achieving n2 may be different than those for achieving n1. Such hypotheses that are inconsistent with their parents are pruned. In addition, if t(i) is inconsistent with its parent t, any children of t(i) are also inconsistent with t (e.g. children of n2 in Figure 2 are also gray nodes, meaning they are inconsistent with the base hypothesis). Through pruning, the size of entire space can be greatly reduced. In the resulting hypothesis space, every single hypothesis is consistent with the base hypothesis. By only keeping consistent hypotheses via pruning, we can remove fluents that are not representative of the main goal associated with the verb. 5.3 Space Merging If the robot has existing knowledge (i.e. hypothesis space Ht) for a verb frame, the induced hypothesis space H from a new instance of the same verb will be merged with the existing space Ht. Currently, a new space Ht+1 is generated where the nodes of Ht+1 are the union of H and Ht, and links in Ht+1 are generated by checking the parent-child relationship between nodes. In future work, more space merging operations will be explored, and human feedback will be incorporated into the induction process. 6 Hypothesis Selection Hypothesis selection is applied when the agent intends to execute a command. Given a verb frame extracted from the language command, the agent will first select the best hypothesis (describing the goal state) from the existing knowledge base, and then apply a symbolic planner to generate an action sequence to achieve the goal. In our framework, the model of selecting the best hypothesis is incrementally learned throughout continuous interaction with humans. More specifically, given a correct action sequence (whether performed by the robot or provided by the human), a regression model is trained to capture the fitness of a hypothesis given a particular situation. Inference: Given a verb frame vi and a working environment Ei, the goal of inference is to estimate how well each hypothesis hk from a space Ht describes the expected result of performing vi 112 in Ei. The best fit hypothesis will be used as the goal state to generate the action sequence. Specifically, the “goodness” of describing command vi with hypothesis hk in environment Ei is formulated as follows: f(hk | vi; Ei; Ht) = W T · Φ(hk, vi, Ei, Ht) (1) where Φ(hk, vi, Ei, Ht) is a feature vector capturing multiple aspects of relations between hk, vi, Ei and Ht as shown in Table 1; and W captures the weight associated with each feature. 
Example global features include whether the candidate goal hk is in the top level of entire space Ht and whether hk has the highest frequency. Example local features include if most of the fluents in hk are already satisfied in current scene Ei (as this hk is unlikely to be a desired goal state). The features also include whether the same verb frame vi has been performed in a similar scene during previous interactions, as the corresponding hypotheses induced during that experience are more likely to be relevant and are thus preferred. Parameter Estimation: Given an action sequence ⃗Ai that illustrates how to correctly perform command vi in environment Ei during interaction, the model weights will be incrementally updated with1: Wt+1 = Wt −η(α∂R(Wt) ∂Wt + ∂L(Jki, fki) ∂Wt ) where fki := f(hk|vi; Ei; Ht) is defined in Equation 1. Jki is the dependent variable the model should approximate, where Jki := J(si, hk) is the Jaccard Index (details in Section 7) between hypothesis hk and a set of changed states si (i.e. the changed states of executing the illustration action sequence ⃗Ai in current environment). L(Jki, fki) is a squared loss function. αR(Wt) is the penalty term, and η is the constant learning rate. 7 Experiment Setup Dataset Description. To evaluate our approach, we applied the dataset made available by (Misra et al., 2015). To support incremental learning, each utterance from every original paragraph is extracted so that each command/utterance only contains one verb and its arguments. The corresponding initial environment and an action sequence 1The SGD regressor in the scikit-learn (Pedregosa et al., 2011) is used to perform the linear regression with L2 regularization. Features on candidate hypothesis hk and the space Ht 1. If hk belongs to the top level of Ht. 2. If hk has the highest frequency in Ht. Features on hk and current situation Ei 3. Portion of fluents in hk that are already satisfied by Ei. 4. Portion of non-argument objects in hk. Examples of non-argument objects are o1 and o2 in Figure 2. Features on relations between a testing verb frame vi and previous interaction experience 5. Whether the same verb frame vi has been executed previously with the same argument objects. 6. Similarities between noun phrase descriptions used in current command and commands from interaction history. Table 1: Current features used for incremental learning of the regression model. The first two are binary features and the rest are real-valued features. taught by a human for each command are also extracted. An example is shown in Figure 3, where Li is a language command, Ei is the initial working environment, and ⃗Ai is a sequence of primitive actions to complete the command given by the human. In the original data, some sentences are not aligned with any actions, and thus cannot be used for either the learning or the evaluation. Removing these unaligned sentences resulted in a total of 991 data instances, including 165 different verb frames. Among the 991 data instances, 793 were used for incremental learning (i.e., space induction and hypothesis selector learning). Specifically, given a command, if the robot correctly predicts an action sequence2, this correct prediction is used to update the hypothesis selector. Otherwise, the agent will require a correct action sequence from the human, which is used for hypothesis space induction as well as updating the hypothesis selector. 
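Concretely, each such selector update can be sketched as follows, assuming the scikit-learn SGDRegressor mentioned in the footnote above; the phi_vectors argument and the jaccard helper are our own placeholders, not part of the released system.

```python
# Sketch of one incremental selector update after observing a correct action
# sequence. For each hypothesis h_k in the current space, the regression target is
# the Jaccard index between h_k and the set of states the demonstration changed.
import numpy as np
from sklearn.linear_model import SGDRegressor

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Squared loss with L2 penalty and a constant learning rate, as described above.
selector = SGDRegressor(penalty="l2", learning_rate="constant", eta0=0.1)

def update_selector(selector, hypotheses, changed_states, phi_vectors):
    X = np.vstack(phi_vectors)                               # one Phi(.) row per hypothesis
    y = np.array([jaccard(changed_states, h) for h in hypotheses])
    selector.partial_fit(X, y)                               # single SGD pass over this instance
    return selector
```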
The hypothesis spaces and regression based selectors acquired at each run were evaluated on the other 20% (198) testing instances. Specifically, for each testing instance, the induced space and the hypothesis selector were applied to identify a desired goal state. Then a symbolic planner3 was applied to predict an action sequence ⃗A(p) based on this predicted goal state. We then compared ⃗A(p) with the ground truth action sequence ⃗A(g) using the following two metrics. • IED (Instruction Edit Distance) measures 2Currently, a prediction is considered correct if the predicted result (c(p)) is similar to a human labeled action sequence (c(g)) (i.e., SJI(c(g), c(p)) > 0.5). 3The symbolic planner implemented by (Rintanen, 2012) was utilized to generate action sequences. 113 (a) IED results for different configurations (b) SJI results for different configurations Figure 4: The overall performance on the testing set with different configurations in generating the base hypothesis and in hypothesis selection. Each configuration runs five times by randomly shuffling the order of learning instances, and the averaged performance is reported. The result from Misra2015 is shown as a line. Results that are statistically significant better than Misra2015 are marked with ∗(paired t-test, p< 0.05). similarity between the ground truth action sequence ⃗A(g) and the predicted sequence ⃗A(p). Specifically, the edit distance d between two action sequences ⃗A(g) and ⃗A(p) is first calculated. Then d is rescaled as IED = 1 − d/max( ⃗A(g), ⃗A(p)), such that IED ranges from 0 to 1 and a larger IED means the two sequences are more similar. • SJI (State Jaccard Index). Because different action sequences could lead to a same goal state, we also use Jaccard Index to check the overlap between the changed states. Specifically, executing the ground truth action sequence ⃗A(g) in the initial scene Ei results in a final environment E′ i. Suppose the changed states between Ei and E′ i is c(g). For the predicted action sequence, we can calculate another set of changed states c(p). The Jaccard Index between c(g) and c(p) is evaluated, which also ranges from 0 to 1 and a larger SJI means the predicted state changes are more similar to the ground truth. Configurations. We also compared the results of using the regression based selector to select a hypothesis (i.e., RegressionBased) with the following different strategies for selecting the hypothesis: • Misra2015: The state of the art system reported in (Misra et al., 2015) on the command/utterance level evaluation4. 4We applied the same system described in (Misra et al., 2015) to predict action sequences. The only difference is here we report the performance at the command level, not at the paragraph level. • MemoryBased: Given the induced space, only the base hypotheses hks from each learning instances are used. Because these hks don’t have any relaxation, they represent purely learning from memorization. • MostGeneral: In this case, only those hypotheses from the top level of the hypothesis space are used, which contain the least number of fluents. These nodes are the most relaxed hypotheses in the space. • MostFrequent: In this setting, the hypotheses that are most frequently observed in the learning instances are used. 8 Results 8.1 Overall performance The results of the overall performance across different configurations are shown in Figure 4. For both of the IED and SJI (i.e. 
Figure 4(a) and Figure 4(b)), the hypothesis spaces with the regression model based hypothesis selector always achieve the best performance across different configurations, and outperforms the previous approach (Misra et al., 2015). For different base hypothesis induction strategies, the H4all considering all the changed states achieves the best performance across all configurations. This is because H4all keeps all of the state change information compared with other heuristics. The performance of H2manip is similar to H4all. The reason is that, when all the manipulated objects are considered, the resulted set of changed states will cover most of the fluents in H4all. On the other dimension, 114 (a) Use regression based selector to select hypothesis, and compare each base hypothesis induction heuristics. (b) Induce the base hypothesis with H4all, and compare different hypothesis selection strategies. Figure 5: Incremental learning results. The spaces and regression models acquired at different incremental learning cycles are evaluated on testing set. The averaged Jaccard Index is reported. the regression based hypothesis selector achieves the best performance and the MemoryBased strategy has the lowest performance. Results for MostGeneral and MostFrequent are between the regression based selector and MemoryBased. 8.2 Incremental Learning Results Figure 5 presents the incremental learning results on the testing set. To better present the results, we show the performance based on each learning cycle of 40 instances. The averaged Jaccard Index (SJI) is reported. Specifically, Figure 5(a) shows the results of configurations comparing different base hypothesis induction heuristics using regression model based hypothesis selection. After using 200 out of 840 (23.8%) learning instances, all the four curves achieve more than 80% of the overall performance. For example, for the heuristic H4all, the final average Jaccard Index is 0.418. When 200 instances are used, the score is 0.340 (0.340/0.418≈81%). The same number holds for the other heuristics. After 200 instances, H4all and H2manip consistently achieve better performance than H1argonly and H3argrelated. This result indicates that while change of states mostly affect the arguments of the verbs, other state changes in the environment cannot be ignored. Modeling them actually leads to better performance. Using H4all for base hypothesis induction, Figure 5(b) shows the results of comparing different hypothesis selection strategies. The regression model based selector always outperforms other selection strategies. 8.3 Results on Frequently Used Verb Frames Beside overall evaluation, we have also taken a closer look at individual verb frames. Most of the Figure 6: Incremental evaluation for individual verb frames. Four frequently used verb frames are examined: place(x, y), put(x, y), take(x), and turn(x). X-axis is the number of incremental learning instances, and Y-axis is the averaged SJI computed with H4all base hypothesis induction and regression based hypothesis selector. verb frames in the data have a very low frequency, which cannot produce statistically significant results. So we only selected verb frames with frequency larger than 40 in this evaluation. For each verb frame, 60% data are used for incremental learning and 40% are for testing. For each frame, a regression based selector is trained separately. The resulting SJI curves are shown in Figure 6. 
As shown in Figure 6, all the four curves become steady after 8 learning instances are used. However, while some verb frames have final SJIs of more than 0.55 (i.e. take(x) and turn(x)), others have relatively lower results (e.g. results for put(x, y) are lower than 0.4). After examining the learning instances for put(x, y), we found these data are more noisy than the training data for other frames. One source of errors is the incorrect object grounding results. For example, a problematic 115 training instance is “put the pillow on the couch”, where the object grounding module cannot correctly ground the “couch” to the target object. As a result, the changed states of the second argument (i.e. the “couch”) are incorrectly identified, which leads to incorrect prediction of desired states during inference. Another common error source is from automated parsing of utterances. The action frames generated from the parsing results could be incorrect in the first place, which would contribute to a hypothesis space for a wrong frame. These different types of errors are difficult to be recognized by the system itself. This points to the future direction of involving humans in a dialogue to learn a more reliable hypothesis space for verb semantics. 9 Conclusion This paper presents an incremental learning approach that represents and acquires semantics of action verbs based on state changes of the environment. Specifically, we propose a hierarchical hypothesis space, where each node in the space describes a possible effect on the world from the verb. Given a language command, the induced hypothesis space, together with a learned hypothesis selector, can be applied by the agent to plan for lower-level actions. Our empirical results have demonstrated a significant improvement in performance compared to a previous leading approach. More importantly, as our approach is based on incremental learning, it can be potentially integrated in a dialogue system to support life-long learning from humans. Our future work will extend the current approach with dialogue modeling to learn more reliable hypothesis spaces of resulting states for verb semantics. Acknowledgments This work was supported by IIS-1208390 and IIS-1617682 from the National Science Foundation. The authors would like to thank Dipendra K. Misra and colleagues for providing the evaluation data, and the anonymous reviewers for valuable comments. References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, Volume1(1):49– 62. R. Cantrell, K. Talamadupula, P. Schermerhorn, J. Benton, S. Kambhampati, and M. Scheutz. 2012. Tell me when and why to do it! run-time planner model updates via natural language instruction. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI’12), pages 471–478, Boston, Massachusetts, USA, March. David L Chen and Raymond J Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI2011), pages 859–865, San Francisco, California, USA, August. Richard E. Fikes and Nils J. Nilsson. 1971. Strips: A new approach to the application of theorem proving to problem solving. In Proceedings of the 2nd International Joint Conference on Artificial Intelligence (IJCAI’71), pages 608–620, London, England. 
Qiaozi Gao, Malcolm Doering, Shaohua Yang, and Joyce Y. Chai. 2016. Physical causality of action verbs in grounded language understanding. In In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Yolanda Gil. 1994. Learning by experimentation: incremental refinement of incomplete planning domains. In Prococeedings of the Eleventh International Conference on Machine Learning (ICML’94), pages 87–95, New Brunswick, NJ, USA. P. Gorniak and D. Roy. 2007. Situated language understanding as filtering perceived affordances. Cognitive Science, Volume31(2):197–231. Malka Rappaport Hovav and Beth Levin. 2008. Reflections on manner/result complementarity. Lecture notes. Malka Rappaport Hovav and Beth Levin. 2010. Reflections on Manner / Result Complementarity. Lexical Semantics, Syntax, and Event Structure, pages 21–38. Yunyi Jia, Ning Xi, Joyce Y. Chai, Yu Cheng, Rui Fang, and Lanbo She. 2014. Perceptive feedback for natural language control of robotic operations. In 2014 IEEE International Conference on Robotics and Automation, ICRA 2014, Hong Kong, China, May 31 June 7, 2014, pages 6673–6678. Joohyun Kim and Raymond J. Mooney. 2012. Unsupervised pcfg induction for grounded language learning with highly ambiguous supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL ’12), pages 433– 444, Jeju Island, Korea. 116 Hadas Kress-Gazit, Georgios E Fainekos, and George J Pappas. 2007. From structured english to robot motion. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pages 2717–2722. Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated humanrobot dialogue. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI’15), pages 2288–2294, Austin, Texas, USA. Changsong Liu, Lanbo She, Rui Fang, and Joyce Y. Chai. 2014. Probabilistic labeling for efficient referential grounding based on collaborative discourse. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics ACL’14 (Volume 2: Short Papers), pages 13–18, Baltimore, MD, USA. Dipendra Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2014. Tell me dave: Contextsensitive grounding of natural language to mobile manipulation instructions. In Proceedings of Robotics: Science and Systems (RSS’14), Berkeley, US. Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing ACL-IJCNLP’15 (Volume 1: Long Papers), pages 992–1002, Beijing, China. Shiwali Mohan, James Kirk, and John Laird. 2013. A computational model for situated task learning with interactive instruction. In Proceedings of the International conference on cognitive modeling (ICCM’13). Kira Mouro, Luke S. Zettlemoyer, Ronald P. A. Petrick, and Mark Steedman. 2012. Learning strips operators from noisy and incomplete observations. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI’12), pages 614–623, Catalina Island, CA, USA. Hanna M Pasula, Luke S Zettlemoyer, and Leslie Pack Kaelbling. 2007. Learning symbolic models of stochastic domains. 
Journal of Artificial Intelligence Research, Volume29:309–352. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, Volume12:2825–2830. Jussi Rintanen. 2012. Planning as satisfiability: Heuristics. Artificial Intelligence, Volume193:45– 86. Lanbo She, Yu Cheng, Joyce Y. Chai, Yunyi Jia, Shaohua Yang, and Ning Xi. 2014a. Teaching robots new actions through natural language instructions. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, IEEE RO-MAN’14, pages 868–873, Edinburgh, UK. Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Y. Chai, and Ning Xi. 2014b. Back to the blocks world: Learning new actions through situated human-robot dialogue. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 89– 97, Philadelphia, PA, U.S.A., June. Association for Computational Linguistics. M. Spangenberg and D. Henrich. 2015. Grounding of actions based on verbalized physical effects and manipulation primitives. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 844–851, Hamburg, Germany. Jaeyong Sung, Bart Selman, and Ashutosh Saxena. 2014. Synthesizing manipulation sequences for under-specified tasks using unrolled markov random fields. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’14), pages 2970–2977, Chicago, IL, USA. Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning, Volume94(2):151–167. Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the 2015 International Joint Conference on Artificial Intelligence (IJCAI), pages 1923– 1929, Buenos Aires, Argentina. Xuemei Wang. 1995. Learning by observation and practice: An incremental approach for planning operator acquisition. In Proceedings of the Twelfth International Conference on Machine Learning (ICML’95), pages 549–557, Tahoe City, California, USA. Qiang Yang, Kangheng Wu, and Yunfei Jiang. 2007. Learning action models from plan examples using weighted max-sat. Artificial Intelligence, Volume171(23):107 – 143. Shaohua Yang, Qiaozi Gao, Changsong Liu, Caiming Xiong, Song-Chun Zhu, and Joyce Y. Chai. 2016. Grounded semantic role labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL’16), San Diego, California. 117
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1158–1169, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics ALTO: Active Learning with Topic Overviews for Speeding Label Induction and Document Labeling Forough Poursabzi-Sangdeh Computer Science University of Colorado [email protected] Jordan Boyd-Graber Computer Science University of Colorado [email protected] Leah Findlater iSchool and UMIACS University of Maryland [email protected] Kevin Seppi Computer Science Brigham Young University [email protected] Abstract Effective text classification requires experts to annotate data with labels; these training data are time-consuming and expensive to obtain. If you know what labels you want, active learning can reduce the number of labeled documents needed. However, establishing the label set remains difficult. Annotators often lack the global knowledge needed to induce a label set. We introduce ALTO: Active Learning with Topic Overviews, an interactive system to help humans annotate documents: topic models provide a global overview of what labels to create and active learning directs them to the right documents to label. Our forty-annotator user study shows that while active learning alone is best in extremely resource limited conditions, topic models (even by themselves) lead to better label sets, and ALTO’s combination is best overall. 1 Introduction Many fields depend on texts labeled by human experts; computational linguistics uses such annotation to determine word senses and sentiment (Kelly and Stone, 1975; Kim and Hovy, 2004); while social science uses “coding” to scale up and systemetize content analysis (Budge, 2001; Klingemann et al., 2006). Classification takes these labeled data as a training set and labels new data automatically. Creating a broadly applicable and consistent label set that generalizes well is time-consuming and difficult, requiring expensive annotators to examine large swaths of the data. Effective NLP systems must measure (Hwa, 2004; Osborne and Baldridge, 2004; Ngai and Yarowsky, 2000) and reduce annotation cost (Tomanek et al., 2007). Annotation is hard because it requires both global and local knowledge of the entire dataset. Global knowledge is required to create the set of labels, and local knowledge is required to annotate the most useful examples to serve as a training set for an automatic classifier. The former’s cost is often hidden in multiple rounds of refining annotation guidelines. We create a single interface—ALTO (Active Learning with Topic Overviews)—to address both global and local challenges using two machine learning tools: topic models and active learning (we review both in Section 2). Topic models address the need for annotators to have a global overview of the data, exposing the broad themes of the corpus so annotators know what labels to create. Active learning selects documents that help the classifier understand the differences between labels and directs the user’s attention locally to them. We provide users four experimental conditions to compare the usefulness of a topic model or a simple list of documents, with or without active learning suggestions (Section 3). We then describe our data and evaluation metrics (Section 4). Through both synthetic experiments (Section 5) and a user study (Section 6) with forty participants, we evaluate ALTO and its constituent components by comparing results from the four conditions introduced above. 
We first examine user strategies for organizing documents, user satisfaction, and user efficiency. Finally, we evaluate the overall effectiveness of the label set in a post study crowdsourced task. 1158 Topic words Document Title metropolitan, carrier, rail, freight, passenger, driver, airport, traffic, transit, vehicles A bill to improve the safety of motorcoaches, and for other purposes. violence, sexual, criminal, assault, offense, victims, domestic, crime, abuse, trafficking A bill to provide criminal penalties for stalking. agricultural, farm, agriculture, rural, producer, dairy, crop, producers, commodity, nutrition To amend the Federal Crop Insurance Act to extend certain supplemental agricultural disaster assistance programs through fiscal year 2017, and for other purposes. Table 1: Given a dataset—in this case, the US congressional bills dataset—topics are automatically discovered sorted lists of terms that summarize segments of a document collection. Topics also are associated with documents. These topics give users a sense of documents’ main themes and help users create high-quality labels. 2 Topic Overviews and Active Learning ALTO,1 a framework for assigning labels to documents that uses both global and local knowledge to help users create and assign document labels, has two main components: topic overview and active learning selection. We explain how ALTO uses topic models and active learning to aid label induction and document labeling. Topic Models Topic models (Blei et al., 2003) automatically induce structure from a text corpus. Given a corpus and a constant K for the number of topics, topic models output (i) a distribution over words for each topic k (φk,w) and (ii) a distribution over topics for each document (θd,k). Each topic’s most probable words and associated documents can help a user understand what the collection is about. Table 1 shows examples of topics and their highest associated documents from our corpus of US congressional bills. Our hypothesis is that showing documents grouped by topics will be more effective than having the user wade through an undifferentiated list of random documents and mentally sort the major themes themselves. Active Learning Active learning (Settles, 2012) directs users’ attention to the examples that would 1Code available at https://github.com/ Foroughp/ALTO-ACL-2016 be most useful to label when training a classifier. When user time is scarce, active learning builds a more effective training set than random labeling: uncertainty sampling (Lewis and Gale, 1994) or query by committee (Seung et al., 1992) direct users to the most useful documents to label. In contrast to topic models, active learning provides local information: this document is the one you should pay attention to. Our hypothesis is that active learning directing users to documents most beneficial to label will not only be more effective than randomly selecting documents but will also complement the global information provided by topic models. Section 3.3 describes our approaches for directing user’s local attention. 3 Study Conditions Our goal is to characterize how local and global knowledge can aid users in annotating a dataset. This section describes our four experimental conditions and outlines the user’s process for labeling documents. 3.1 Study Design The study uses a 2 × 2 between-subjects design, with factors of document collection overview (two levels: topic model or list) and document selection (two levels: active or random). 
The four conditions, with the TA condition representing ALTO, are: 1. Topic model overview, active selection (TA) 2. Topic model overview, random selection (TR) 3. List overview, active selection (LA) 4. List overview, random selection (LR) 3.2 Document Collection Overview The topic and list overviews offer different overall structure but the same basic elements for users to create, modify, and apply labels (Section 3.4). The topic overview (Figure 1a) builds on Hu et al. (2014): for each topic, the top twenty words are shown alongside twenty document titles. Topic words (w) are sized based on their probability φk,w in the topic k and the documents with the highest probability of that topic (θd,k) are shown. The list overview, in contrast, presents documents as a simple, randomly ordered list of titles (Figure 1b). We display the same number of documents (20K, where K is the total number of topics) in both the topic model and list overviews, but the list overview provides no topic information. 1159 Main Interface (a) Topic Overview (TA and TR) (b) List Overview (LA and LR) OR Figure 1: Our annotation system. Initially, the user sees lists of documents organized in either a list format or grouped into topics (only two topics are shown here; users can scroll to additional documents). The user can click on a document to label it. Classifier Label (if available) Raw Text User Label Figure 2: After clicking on a document from the list or topic overview, the user inspects the text and provides a label. If the classifier has a guess at the label, the user can confirm the guess. 3.3 Document Selection We use a preference function U to direct users’ attention to specific documents. To provide consistency across the four conditions, each condition will highlight the document that scores the highest for the condition’s preference function. For the random selection conditions, TR and LR, document selection is random, within a topic or globally. We expect this to be less useful than active learning. The document preference functions are: User-Labeled Documents Classifier-Labeled Documents Selected Document Figure 3: After the user has labeled some documents, the system can automatically label other documents and select which documents would be most helpful to annotate next. In the random selection setting, random documents are selected. LA: LA uses traditional uncertainty sampling: U LA d = HC [Yd] , (1) where HC [yd] = −P i P(yi|d)logP(yi|d) is the classifier entropy. Entropy measures how confused (uncertain) classifier C is about its prediction of a document d’s label y. Intuitively, it prefers documents the classifier suggests many labels instead of a single, confident prediction. LR: LR’s approach is the same as LA’s except we replace HC [yd] with a uniform random number: U LR d ∼unif(0, 1). (2) In contrast to LA, which suggests the most uncertain document, LR suggests a random document. 1160 TA: Dasgupta and Hsu (2008) argue that clustering should inform active learning criteria, balancing coverage against classifier accuracy. We adapt their method to flat topic models—in contrast to their hierarchical cluster trees—by creating a composite measure of document uncertainty within a topic: U TA d = HC [yd] θd,k, (3) where k is the prominent topic for document d. U TA d prefers documents that are representative of a topic (i.e., have a high value of θd,k for that topic) and are informative for the classifier. 
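The scores defined so far share the same ingredients, classifier entropy and topic proportions, as the sketch below illustrates; it is our own illustration (not the study software) and assumes the classifier exposes a label distribution p for each document. TR, defined next, substitutes a random draw for the entropy term.

```python
# Illustrative versions of the preference scores in Equations 1-3. `p` is the
# classifier's predicted label distribution for document d; `theta_dk` is the
# probability of d's most prominent topic.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def u_la(p):                      # Eq. 1: uncertainty sampling
    return entropy(p)

def u_lr():                       # Eq. 2: random selection
    return float(np.random.uniform(0.0, 1.0))

def u_ta(p, theta_dk):            # Eq. 3: uncertainty weighted by topic representativeness
    return entropy(p) * theta_dk
```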
TR: TR’s approach is the same as TA’s except we replace HC [Yd] with a uniformly random number: U TR d = unif(0, 1)θd,k. (4) Similar to TA, U TR d prefers documents that are representative of a topic, but not any particular document in the topic. Incorporating the random component encourages covering different documents in diverse topics. In LA and LR, the preference function directly chooses a document and directs the user to it. On the other hand, U TA d and U TR d are topic dependent. TA emphasizes documents that are both informative to the classifier and representative of a topic; if a document is not representative, the surrounding context of a topic will be less useful. Therefore, the factor θd,k appears in both. Thus, they require that a topic be chosen first and then the document with maximum preference, U, within that topic can be chosen. In TR, the topic is chosen randomly. In TA, the topic is chosen by k∗= arg max k (mediand(HC [yd] θd,k). (5) That is the topic with the maximum median U. Median encodes how “confusing” a topic is.2 In other words, topic k∗is the topic that its documents confuse the classifier most. 3.4 User Labeling Process The user’s labeling process is the same in all four conditions. The overview (topic or list) allows users to examine individual documents (Figure 1). Clicking on a document opens a dialog box (Figure 2) with the text of the document and three options: 1. Create and assign a new label to the document. 2. Choose an existing label for the document. 2Outliers skew other measures (e.g., max or mean). 3. Skip the document. Once the user has labeled two documents with different labels, the displayed documents are replaced based on the preference function (Section 3.3), every time the user labels (or updates labels for) a document. In TA and TR, each topic’s documents are replaced with the twenty highest ranked documents. In LA and LR, all documents are updated with the top 20K ranked documents.3 The system also suggests one document to consider by auto-scrolling to it and drawing a red box around its title (Figure 3). The user may ignore that document and click on any other document. After the user labels ten documents, the classifier runs and assigns labels to other documents.4 For classifier-labeled documents, the user can either approve the label or assign a different label. The process continues until the user is satisfied or a time runs out (forty minutes in our user study, Section 6). We use time to control for the varying difficulty of assigning document labels: active learning will select more difficult documents to annotate, but they may be more useful; time is a more fair basis of comparison in real-world tasks. 4 Data and Evaluation Metrics In this section, we describe our data, the machine learning techniques to learn classifiers from examples, and the evaluation metrics to know whether the final labeling of the complete documents collection was successful. 4.1 Datasets Data Our experiments require corpora to compare user labels with gold standard labels. We experiment with two corpora: 20Newsgroups (Lang, 2007) and US congressional bills from GovTrack.5 For US congressional bills, GovTrack provides bill information such as the title and text, while the Congressional Bills Project (Adler and Wilkerson, 2006) provides labels and sub-labels for the bills. Examples of labels are agriculture and health, while sub-labels include agricultural trade and comprehensive health care reform. 
The twenty 3In all conditions, the number of displayed unlabeled documents is adjusted based on the number of manually labeled documents. i.e. if the user has labeled n documents in topic k, n manually labeled documents followed by top 20 −n uncertain documents will be shown in topic k. 4To reduce user confusion, for each existing label, only the top 100 documents get a label assigned in the UI. 5https://www.govtrack.us/ 1161 top-level labels have been developed by consensus over many years by a team of top political scientists to create a reliable, robust dataset. We use the 112th Congress; after filtering,6 this dataset has 5558 documents. We use this dataset in both the synthetic experiments (Section 5) and the user study (Section 6). The 20 Newsgroups corpus has 19, 997 documents grouped in twenty news groups that are further grouped into six more general topics. Examples are talk.politics.guns and sci.electronics, which belong to the general topics of politics and science. We use this dataset in synthetic experiments (Section 5). 4.2 Machine Learning Techniques Topic Modeling To choose the number of topics (K), we calculate average topic coherence (Lau et al., 2014) on US Congressional Bills, between ten and forty topics and choose K = 19, as it has the maximum coherence score. For consistency, we use the same number of topics (K = 19) for 20 Newsgroups corpus. After filtering words based on TF-IDF, we use Mallet (McCallum, 2002) with default options to learn topics. Features and Classification A logistic regression predicts labels for documents and provides the classification uncertainty for active learning. To make classification and active learning updates efficient, we use incremental learning (Carpenter, 2008, LingPipe). We update classification parameters using stochastic gradient descent, restarting with the previously learned parameters as new labeled documents become available.7 We use cross validation, using argmax topics as surrogate labels, to set the parameters for learning the classifier.8 The features for classification include topic probabilities, unigrams, and the fraction of labeled documents in each document’s prominent topic. The intuition behind adding this last feature is to allow active learning to suggest documents in a diverse 6We remove bills that have less than fifty words, no assigned gold label, duplicate titles, or have the gold label GOVERNMENT OPERATIONS or SOCIAL WELFARE, which are broad and difficult for users to label. 7Exceptions are when a new label is added, a document’s label is deleted, or a label is deleted. In those cases, we train the classifier from scratch. Also, for final results in Section 6, we train a classifier from scratch. 8We use blockSize= 1 #examples minEpochs=100, learningRate=0.1, minImprovement=0.01, maxEpochs=1000, and rollingAverageSize=5. The regression is unregularized. range of topics if it finds this feature a useful indicator of uncertainty.9 4.3 Evaluation Metrics Our goal is to create a system that allows users to quickly induce a high-quality label set. We compare the user-created label sets against the data’s gold label sets. Comparing different clusterings is a difficult task, so we use three clustering evaluation metrics: purity (Zhao and Karypis, 2001), rand index (Rand, 1971, RI), and normalized mutual information (Strehl and Ghosh, 2003, NMI).10 Purity The documents labeled with a good user label should only have one (or a few) gold labels associated with them: this is measured by cluster purity. 
Given each user cluster, it measures what fraction of the documents in a user cluster belong to the most frequent gold label in that cluster: purity(Ω, G) = 1 N X l max j |Ωl ∩Gj|, (6) where L is the number of labels user creates, Ω= {Ω1, Ω2, . . . , ΩL} is the user clustering of documents, G = {G1, G2, . . . , GJ} is gold clustering of documents, and N is the total number of documents. The user Ωl and gold Gj labels are interpreted as sets containing all documents assigned to that label. Rand index (RI) RI is a pair counting measure, where cluster evaluation is considered as a series of decisions. If two documents have the same gold label and the same user label (TP) or if they do not have the same gold label and are not assigned the same user label (TN), the decision is right. Otherwise, it is wrong (FP, FN). RI measures the percentage of decisions that are right: RI = TP + TN TP + FP + TN + FN . (7) Normalized mutual information (NMI) NMI is an information theoretic measure that measures the amount of information one gets about the gold clusters by knowing what the user clusters are: NMI(Ω, G) = 2I(Ω, G) HΩ+ HG , (8) 9However, final classifier’s coefficients suggested that this feature did not have a large effect. 10We avoided using adjusted rand index (Hubert and Arabie, 1985), because it can yield negative values, which is not consistent with purity and NMI. We also computed variation of information (Meil˘a, 2003) and normalized information distance (Vit´anyi et al., 2009) and observed consistent trends. We omit these results for the sake of space. 1162 where Ωand G are user and gold clusters, H is the entropy and I is mutual information (Bouma, 2009). While purity, RI, and NMI are all normalized within [0, 1] (higher is better), they measure different things. Purity measures the intersection between two clusterings, it is sensitive to the number of clusters, and it is not symmetric. On the other hand, RI and NMI are less sensitive to the number of clusters and are symmetric. RI measures pairwise agreement in contrast to purity’s emphasis on intersection. Moreover, NMI measures shared information between two clusterings. None of these metrics are perfect: purity can be exploited by putting each document in its own label, RI does not distinguish separating similar documents with distinct labels from giving dissimilar documents the same label, and NMI’s ability to compare different numbers of clusters means that it sometimes gives high scores for clusterings by chance. Given the diverse nature of these metrics, if a labeling does well in all three of them, we can be relatively confident that it is not a degenerate solution that games the system. 5 Synthetic Experiments Before running a user study, we test our hypothesis that topic model overviews and active learning selection improve final cluster quality compared to standard baselines: list overview and random selection. We simulate the four conditions on Congressional Bills and 20 Newsgroups. Since we believe annotators create more specific labels compared to the gold labels, we use sublabels as simulated user labels and labels as gold labels (we give examples of labels and sub-labels in Section 4.1). We start with two randomly selected documents that have different sub-labels, assign the corresponding sub-labels, then add more labels based on each condition’s preference function (Section 3.3). 
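The simulation tracks the three metrics just defined; for reference, they can be computed directly from Equations 6 to 8, as in the sketch below (our own helper code, not the released evaluation scripts).

```python
# Purity, rand index, and NMI computed from the definitions above.
# `user` and `gold` are equal-length lists of cluster/label ids per document.
from collections import Counter
from itertools import combinations
from math import log

def purity(user, gold):
    clusters = {}
    for u, g in zip(user, gold):
        clusters.setdefault(u, Counter())[g] += 1
    return sum(c.most_common(1)[0][1] for c in clusters.values()) / len(gold)

def rand_index(user, gold):
    pairs = list(zip(user, gold))
    agree = sum((u1 == u2) == (g1 == g2)
                for (u1, g1), (u2, g2) in combinations(pairs, 2))
    n = len(pairs)
    return agree / (n * (n - 1) / 2)

def nmi(user, gold):
    n = len(gold)
    pu, pg, joint = Counter(user), Counter(gold), Counter(zip(user, gold))
    i = sum(c / n * log((c / n) / ((pu[u] / n) * (pg[g] / n)))
            for (u, g), c in joint.items())
    hu = -sum(c / n * log(c / n) for c in pu.values())
    hg = -sum(c / n * log(c / n) for c in pg.values())
    return 2 * i / (hu + hg) if (hu + hg) else 1.0
```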
We follow the condition’s preference function and incrementally add labels until 100 documents have been labeled (100 documents are representative of what a human can label in about an hour). Given these labels, we compute purity, RI, and NMI over time. This procedure is repeated fifteen times (to account for the randomness of initial document selections and the preference functions with randomness).11 11Synthetic experiment data available at http: //github.com/Pinafore/publications/tree/ Congress (Synth) Newsgroups (Synth) G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 0.4 0.6 0.8 0.75 0.80 0.85 0.90 0.2 0.3 0.4 0.5 purity RI NMI 25 50 75 100 25 50 75 100 Documents Labeled Median (over 15 runs) condition G LA LR TA TR Figure 4: Synthetic results on US Congressional Bills and 20 Newsgroups data sets. Topic models help guide annotation attention to diverse segments of the data. Synthetic results validate our hypothesis that topic overview and active learning selection can help label a corpus more efficiently (Figure 4). LA shows early gains, but tends to falter eventually compared to both topic overview and topic overview combined with active learning selection (TR and TA). However, these experiments do not validate ALTO. Not all documents require the same time or effort to label, and active learning focuses on the hardest examples, which may confuse users. Thus, we need to evaluate how effectively actual users annotate a collection’s documents. 6 User Study Following the synthetic experiments, we conduct a user study with forty participants to evaluate ALTO (TA condition) against three alternatives that lack topic overview (LA), active learning selection (TR), or both (LR) (Sections 6.1 and 6.2). Then, we conduct a crowdsourced study to compare the overall master/2016_acl_doclabel/data/synthetic_ exp 1163 G G G G G G G G G G G G G G G G G G G G G G G G 0.2 0.3 0.4 0.5 0.70 0.75 0.80 0.85 0.90 0.1 0.2 0.3 Purity RI NMI 10 20 30 40 Elapsed Time (min) Median (over participants) condition G LA LR TA TR Figure 5: User study results on US Congressional Bills dataset. Active learning selection helps initially, but the combination of active learning selection and topic model overview has highest quality labels by the end of the task. effectiveness of the label set generated by the participants in the four conditions (Section 6.3). 6.1 Method We use the freelance marketplace Upwork to recruit online participants.12 We require participants to have more than 90% job success on Upwork, English fluency, and US residency. Participants are randomly assigned to one of the four conditions and we recruited ten participants per condition. Participants completed a demographic questionnaire, viewed a video of task instructions, and then interacted with the system and labeled documents until satisfied with the labels or forty minutes had elapsed.13 The session ended with a survey, where participants rated mental, physical, and temporal demand, and performance, effort, and frustration on 20-point scales, using questions adapted from the NASA Task Load Index (Hart and Staveland, 1988, TLX). The survey also included 7-point scales for ease of coming up with labels, usefulness and satisfaction with the system, and—for TR and 12http://Upwork.com 13Forty minutes of activity, excluding system time to classify and update documents. Participants nearly exhausted the time: 39.3 average minutes in TA, 38.8 in TR, 40.0 in LA, and 35.9 in LR. 
F p Overview Selection Overview Selection final purity 81.03 7.18 < .001 .011 final RI 39.89 6.28 < .001 .017 final NMI 70.92 9.87 < .001 .003 df(1,36) for all reported results Table 2: Results from 2 × 2 ANOVA with ART analyses on the final purity, RI, and NMI metrics. Only main effects for the factors of overview and selection are shown; no interaction effects were statistically significant. Topics and active learning both had significant effects on quality scores. TA—topic information helpfulness. Each participant was paid fifteen dollars.14 For statistical analysis, we primarily use 2 × 2 (overview × selection) ANOVAs with Aligned Rank Transform (Wobbrock et al., 2011, ART), which is a non-parametric alternative to a standard ANOVA that is appropriate when data are not expected to meet the normality assumption of ANOVA. 6.2 Document Cluster Evaluation We analyze the data by dividing the forty-minute labeling task into five minute intervals. If a participant stops before the time limit, we consider their final dataset to stay the same for any remaining intervals. Figure 5 shows the measures across study conditions, with similar trends for all three measures. Topic model overview and active learning both significantly improve final dataset measures. The topic overview and active selection conditions significantly outperform the list overview and random selection, respectively, on the final label quality metrics. Table 2 shows the results of separate 2 × 2 ANOVAs with ART with each of final purity, RI, and NMI scores. There are significant main effects of overview and selection on all three metrics; no interaction effects were significant. TR outperforms LA. Topic models by themselves outperform traditional active learning strategies (Figure 5). LA performs better than LR; while active learning was useful, it was not as useful as the topic model overview (TR and TA). LA provides an initial benefit. Average purity, NMI and RI were highest with LA for the earliest labeling time intervals. Thus, when time is very 14User study data available at http://github.com/ Pinafore/publications/tree/master/2016_ acl_doclabel/data/user_exp 1164 M ± SD [median] purity RI NMI TA 0.31 ± 0.08 [0.32] 0.80 ± 0.05 [0.80] 0.19 ± 0.08 [0.21] TR 0.32 ± 0.09 [0.31] 0.82 ± 0.04 [0.82] 0.21 ± 0.09 [0.20] LA 0.35 ± 0.05 [0.35] 0.82 ± 0.04 [0.81] 0.27 ± 0.05 [0.28] LR 0.31 ± 0.04 [0.31] 0.79 ± 0.04 [0.79] 0.19 ± 0.03 [0.19] Table 3: Mean, standard deviation, and median purity, RI, and NMI after ten minutes. NMI in particular shows the benefit of LA over other conditions at early time intervals. limited, using traditional active learning (LA) is preferable to topic overviews; users need time to explore the topics and a subset of documents within them. Table 3 shows the metrics after ten minutes. Separate 2 × 2 ANOVAs with ART on the means of purity, NMI and RI revealed a significant interaction effect between overview and selection on mean NMI (F(1, 36) = 5.58, p = .024), confirming the early performance trends seen in Figure 5 at least for NMI. No other main or interaction effects were significant, likely due to low statistical power. Subjective ratings. Table 4 shows the average scores given for the six NASA-TLX questions in different conditions. 
Separate 2 × 2 ANOVA with ART for each of the measures revealed only one significant result: participants who used the topic model overview find the task to be significantly less frustrating (M = 4.2 and median = 2) than those who used the list overview (M = 7.3 and median = 6.5) on a scale from 1 (low frustration) to 20 (high frustration) (F(1, 36) = 4.43, p = .042), confirming that the topic overview helps users organize their thoughts and experience less stress during labeling. Participants in the TA and TR conditions rate topic information to be useful in completing the task (M = 5.0 and median = 5) on a scale from 1 (not useful at all) to 7 (very useful). Overall, users are positive about their experience with the system. Participants in all conditions rate overall satisfaction with the interface positively (M = 5.8 and median = 6) on a scale from 1 (not satisfied at all) to 7 (very satisfied). Discussion. One can argue that using topic overviews for labeling could have a negative effect: users may ignore the document content and focus on topics for labeling. We tried to avoid this issue by making it clear in the instructions that they need to focus on document content and use topics as a guidance. On average, the participants in TR create 1.96 labels per topic and the participants in TA created 2.26 labels per topic. This suggests that participants are going beyond what they see in topics for labeling, at least in the TA condition. 6.3 Label Evaluation Results Section 6.2 compares clusters of documents in different conditions against the gold clusters but does not evaluate the quality of the labels themselves. Since one of the main contributions of ALTO is to accelerate inducing a high quality label set, we use crowdsourcing to assess how the final induced label sets compare in different conditions. For completeness, we also compare labels against a fully automatic labeling method (Aletras and Stevenson, 2014) that does not require human intervention. We assign automatic labels to documents based on their most prominent topic. We ask users on a crowdsourcing platform to vote for the “best” and “worst” label that describes the content of a US congressional bill (we use Crowdflower restricted to US contributors). Five users label each document and we use the aggregated results generated by Crowdflower. The user gets $0.20 for each task. We randomly choose 200 documents from our dataset (Section 4.1). For each chosen document, we randomly choose a participant from all four conditions (TA, TR, LA, LR). The labels assigned in different conditions and the automatic label of the document’s prominent topic construct the candidate labels for the document.15 Identical labels are merged into one label to avoid showing duplicate labels to users. If a merged label gets a “best” or “worst” vote, we split that vote across all the identical instances.16 Figure 6 shows the average number of “best” and “worst” votes for each condition and the automatic method. ALTO (TA) receives the most “best” votes and the fewest “worst” votes. LR receives the most worst votes. The automatic labels, interestingly, appear to do at least as well as the list view labels, with a similar number of best votes and fewer worst votes. This indicates that automatic labels have reasonable quality compared to at least some manually generated labels. However, when users are provided with a topic model overview— 15Some participants had typos in the labels. We corrected all the typos using pyEnchant (http://pythonhosted. 
org/pyenchant/ ) spellchecker. If the corrected label was still wrong, we corrected it manually. 16Evaluation data available at http://github.com/ Pinafore/publications/tree/master/2016_ acl_doclabel/data/label_eval 1165 M ± SD [median] Condition Mental Demand Physical Demand Temporal Demand Performance Effort Frustration TA 9.8 ± 5.6 [10] 2.9 ± 3.4 [2] 9 ± 7.8 [7] 5.5 ± 5.8 [1.5] 9.4 ± 6.3 [10] 4.5 ± 5.5 [1.5] TR 10.6 ± 4.5 [11] 2.4 ± 2.8 [1] 7.4 ± 4.1 [9] 8.8 ± 6.1 [7.5] 9.8 ± 3.7 [10] 3.9 ± 3.0 [3.5] LA 9.1 ± 5.5 [10] 1.7 ± 1.3 [1] 10.2 ± 4.8 [11] 8.6 ± 5.3 [10] 10.7 ± 6.2 [12.5] 6.7 ± 5.1 [5.5] LR 9.8 ± 6.1 [10] 3.3 ± 2.9 [2] 9.3 ± 5.7 [10] 9.4 ± 5.6 [10] 9.4 ± 6.2 [10] 7.9 ± 5.4 [8] Table 4: Mean, standard deviation, and median results from NASA-TLX post-survey. All questions are scaled 1 (low)–20 (high), except performance, which is scaled 1 (good)–20 (poor). Users found topic model overview conditions, TR and TA, to be significantly less frustrating than the list overview conditions. 0 20 40 60 LA LR TA TR auto Condition Number of Votes (mean) best worst Figure 6: Best and worst votes for document labels. Error bars are standard error from bootstrap sample. ALTO (TA) gets the most best votes and the fewest worst votes. with or without active learning selection—they can generate label sets that improve upon automatic labels and labels assigned without the topic model overview. 7 Related Work Text classification—a ubiquitous machine learning tool for automatically labeling text (Zhang, 2010)— is a well-trodden area of NLP research. The difficulty is often creating the training data (Hwa, 2004; Osborne and Baldridge, 2004); coding theory is an entire subfield of social science devoted to creating, formulating, and applying labels to text data (Saldana, 2012; Musialek et al., 2016). Crowdsourcing (Snow et al., 2008) and active learning (Settles, 2012), can decrease the cost of annotation but only after a label set exists. ALTO’s corpus overviews aid text understanding, building on traditional interfaces for gaining both local and global information (Hearst and Pedersen, 1996). More elaborate interfaces (Eisenstein et al., 2012; Chaney and Blei, 2012; Roberts et al., 2014) provide richer information given a fixed topic model. Alternatively, because topic models are imperfect (Boyd-Graber et al., 2014), refining underlying topic models may also improve users’ understanding of a corpus (Choo et al., 2013; Hoque and Carenini, 2015). Summarizing document collections through discovered topics can happen through raw topics labeled manually by users (Talley et al., 2011), automatically (Lau et al., 2011), or by learning a mapping from labels to topics (Ramage et al., 2009). When there is not a direct correspondence between topics and labels, classifiers learn a mapping (Blei and McAuliffe, 2007; Zhu et al., 2009; Nguyen et al., 2015). Because we want topics to be consistent between users, we use a classifier with static topics in ALTO. Combining our interface with dynamic topics could improve overall labeling, perhaps at the cost of introducing confusion as topics change during the labeling process. 8 Conclusion and Future Work We introduce ALTO, an interactive framework that combines both active learning selections with topic model overviews to both help users induce a label set and assign labels to documents. 
We show that users can more effectively and efficiently induce a label set and create training data using ALTO in comparison with other conditions, which lack either topic overview or active selection. We can further improve ALTO to help users gain better and faster understanding of text corpora. Our current system limits users to view only 20K documents at a time and allows for one label assignment per document. Moreover, the topics are static and do not adapt to better reflect users’ labels. Users should have better support for browsing documents and assigning multiple labels. Finally, with slight changes to what the system considers a document, we believe ALTO can be extended to NLP applications other than classification, such as named entity recognition or semantic role labeling, to reduce the annotation effort. 1166 Acknowledgments We thank the anonymous reviewers, David Mimno, Edward Scott Adler, Philip Resnik, and Burr Settles for their insightful comments. We also thank Nikolaos Aletras for providing the automatic topic labeling code. Boyd-Graber and Poursabzi-Sangdeh’s contribution is supported by NSF Grant NCSE1422492; Findlater, Seppi, and Boyd-Graber’s contribution is supported by collaborative NSF Grant IIS-1409287 (UMD) and IIS-1409739 (BYU). Any opinions, findings, results, or recommendations expressed here are of the authors and do not necessarily reflect the view of the sponsor. References E Scott Adler and John Wilkerson. 2006. Congressional bills project. NSF, 880066:00880061. Nikolaos Aletras and Mark Stevenson. 2014. Labelling topics using unsupervised graph-based methods. In Proceedings of the Association for Computational Linguistics, pages 631–636. Pranav Anand, Joseph King, Jordan L Boyd-Graber, Earl Wagner, Craig H Martell, Douglas W Oard, and Philip Resnik. 2011. Believe me-we can do this! annotating persuasive acts in blog text. In Computational Models of Natural Argument. David M. Blei and Jon D. McAuliffe. 2007. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems. David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3. Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In The Biennial GSCL Conference, pages 31–40. Jordan Boyd-Graber, David Mimno, and David Newman. 2014. Care and feeding of topic models: Problems, diagnostics, and improvements. Handbook of Mixed Membership Models and Their Applications; CRC Press: Boca Raton, FL, USA. Ian Budge. 2001. Mapping policy preferences: estimates for parties, electors, and governments, 19451998, volume 1. Oxford University Press. Bob Carpenter. 2008. Lingpipe 4.1.0. http:// alias-i.com/lingpipe. Allison Chaney and David Blei. 2012. Visualizing topic models. In International AAAI Conference on Weblogs and Social Media. Jaegul Choo, Changhyun Lee, Chandan K. Reddy, and Haesun Park. 2013. UTOPIAN: User-driven topic modeling based on interactive nonnegative matrix factorization. IEEE Transactions on Visualization and Computer Graphics, 19(12):1992–2001. Sanjoy Dasgupta and Daniel Hsu. 2008. Hierarchical sampling for active learning. In Proceedings of the International Conference of Machine Learning. Jacob Eisenstein, Duen Horng Chau, Aniket Kittur, and Eric Xing. 2012. TopicViz: interactive topic exploration in document collections. In International Conference on Human Factors in Computing Systems. Sandra G Hart and Lowell E Staveland. 1988. 
Development of nasa-tlx (task load index): Results of empirical and theoretical research. Advances in psychology, 52:139–183. M.A. Hearst and J.O. Pedersen. 1996. Reexamining the cluster hypothesis: scatter/gather on retrieval results. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Enamul Hoque and Giuseppe Carenini. 2015. Convisit: Interactive topic modeling for exploring asynchronous online conversations. In Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI ’15. Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. Interactive topic modeling. Machine learning, 95(3):423–469. Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of classification, 2(1):193– 218. Rebecca Hwa. 2004. Sample selection for statistical parsing. Computational linguistics, 30(3):253–276. Mohit Iyyer, Peter Enns, Jordan L Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Proceedings of the Association for Computational Linguistics. Edward F Kelly and Philip J Stone. 1975. Computer recognition of English word senses, volume 13. North-Holland. Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In Proceedings of the Association for Computational Linguistics, page 1367. Association for Computational Linguistics. Hans-Dieter Klingemann, Andrea Volkens, Judith Bara, Ian Budge, Michael D McDonald, et al. 2006. Mapping policy preferences II: estimates for parties, electors, and governments in Eastern Europe, European Union, and OECD 1990-2003. Oxford University Press Oxford. Ken Lang. 2007. 20 newsgroups data set. http://www.ai.mit.edu/people/jrennie/20Newsgroups/. 1167 Jey Han Lau, Karl Grieser, David Newman, and Timothy Baldwin. 2011. Automatic labelling of topic models. In Proceedings of the Association for Computational Linguistics, pages 1536–1545. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the European Chapter of the Association for Computational Linguistics. David D Lewis and William A Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 3–12. Springer-Verlag New York, Inc. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://www.cs.umass.edu/ mccallum/mallet. Marina Meil˘a. 2003. Comparing clusterings by the variation of information. In Learning theory and kernel machines, pages 173–187. Springer. Chris Musialek, Philip Resnik, and S. Andrew Stavisky. 2016. Using text analytic techniques to create efficiencies in analyzing qualitative data: A comparison between traditional content analysis and a topic modeling approach. In American Association for Public Opinion Research. Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics. Thang Nguyen, Jordan Boyd-Graber, Jeff Lund, Kevin Seppi, and Eric Ringger. 2015. Is your anchor going up or down? Fast and accurate supervised topic models. In Conference of the North American Chapter of the Association for Computational Linguistics. Sonya Nikolova, Jordan Boyd-Graber, and Christiane Fellbaum. 2011. 
Collecting semantic similarity ratings to connect concepts in assistive communication tools. In Modeling, Learning, and Processing of Text Technological Data Structures, pages 81–93. Springer. Miles Osborne and Jason Baldridge. 2004. Ensemblebased active learning for parse selection. In Conference of the North American Chapter of the Association for Computational Linguistics, pages 89–96. Citeseer. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multilabeled corpora. In Proceedings of Empirical Methods in Natural Language Processing. William M Rand. 1971. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical association, 66(336):846–850. Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4):1064–1082. J. Saldana. 2012. The Coding Manual for Qualitative Researchers. SAGE Publications. Burr Settles. 2012. Active learning (synthesis lectures on artificial intelligence and machine learning). Long Island, NY: Morgan & Clay Pool. H Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pages 287–294. ACM. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of Empirical Methods in Natural Language Processing. Alexander Strehl and Joydeep Ghosh. 2003. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research, 3:583–617. Edmund M. Talley, David Newman, David Mimno, Bruce W. Herr, Hanna M. Wallach, Gully A. P. C. Burns, A. G. Miriam Leenders, and Andrew McCallum. 2011. Database of NIH grants using machinelearned categories and graphical clustering. Nature Methods, 8(6):443–444, May. Katrin Tomanek, Joachim Wermter, and Udo Hahn. 2007. An approach to text corpus construction which cuts annotation costs and maintains reusability of annotated data. In Proceedings of Empirical Methods in Natural Language Processing, pages 486–495. Paul MB Vit´anyi, Frank J Balbach, Rudi L Cilibrasi, and Ming Li. 2009. Normalized information distance. In Information theory and statistical learning, pages 45–82. Springer. Jacob O Wobbrock, Leah Findlater, Darren Gergle, and James J Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 143–146. ACM. Tong Zhang. 2010. Fundamental statistical techniques. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing. Chapman & Hall/CRC, 2nd edition. Ying Zhao and George Karypis. 2001. Criterion functions for document clustering: Experiments and analysis. Technical report, University of Minnesota. 1168 Jun Zhu, Amr Ahmed, and Eric P. Xing. 2009. MedLDA: maximum margin supervised topic models for regression and classification. In Proceedings of the International Conference of Machine Learning. 1169
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1170–1180, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Predicting the Rise and Fall of Scientific Topics from Trends in their Rhetorical Framing Vinodkumar Prabhakaran Stanford University [email protected] William L. Hamilton Stanford University [email protected] Dan McFarland Stanford University [email protected] Dan Jurafsky Stanford University [email protected] Abstract Computationally modeling the evolution of science by tracking how scientific topics rise and fall over time has important implications for research funding and public policy. However, little is known about the mechanisms underlying topic growth and decline. We investigate the role of rhetorical framing: whether the rhetorical role or function that authors ascribe to topics (as methods, as goals, as results, etc.) relates to the historical trajectory of the topics. We train topic models and a rhetorical function classifier to map topic models onto their rhetorical roles in 2.4 million abstracts from the Web of Science from 1991-2010. We find that a topic’s rhetorical function is highly predictive of its eventual growth or decline. For example, topics that are rhetorically described as results tend to be in decline, while topics that function as methods tend to be in early phases of growth. 1 Introduction One of the most compelling research questions in the computational analysis of scientific literature is whether the vast collections of scientific text hold important clues about the dynamics involved in the evolution of science; clues that may help predict the rise and fall of scientific ideas, methods and even fields. Being able to predict scientific trends in advance could potentially revolutionize the way science is done, for instance, by enabling funding agencies to optimize allocation of resources towards promising research areas. Figure 1: Example abstract snippet. The abstract rhetorically frames the stem cells topic as the OBJECTIVE of the research, while the animal models topic functions as the research METHOD. Prior studies have often tracked scientific trends by applying topic modeling (Blei et al., 2003) based techniques to large corpora of scientific texts (Griffiths and Steyvers, 2004; Blei and Lafferty, 2006; Hall et al., 2008). They capture scientific ideas, methods, and fields in terms of topics, modeled as distributions over collection of words. These approaches usually adopt a decontextualized view of text and its usage, associating topics to documents based solely on word occurrences, disregarding where or how the words were employed. In reality, however, scientific abstracts often follow narrative structures (Crookes, 1986; Latour, 1987) that signal the specific rhetorical roles that different topics play within the research (Figure 1). The rhetorical role of a topic is the purpose or role it plays in the paper: as its background (scientific context), its objective/goal, the data employed, the design or method used (mode of inference), the results (what is found) or the conclusions (what they mean). 1170 RATIONALE: Neonatal ibotenic acid lesion of the ventral hippocampus was proposed as a relevant animal model of schizophrenia reflecting positive as well as negative symptoms of this disease. Before and after reaching maturity, specific alterations in the animals’ social behaviour were found. 
OBJECTIVE: In this study, social behaviour of ventral hippocampal lesioned rats was analysed. For comparison, rats lesioned either in the ventral hippocampus or the dorsal hippocampus at the age of 8 weeks were tested. METHODS: Rats on day 7 of age were lesioned with ibotenic acid in the ventral hippocampus and social behaviour was tested at the age of 13 weeks. For comparison, adult 8-week-old rats were lesioned either in the ventral or the dorsal hippocampus. Their social behaviour was tested at the age of 18 weeks. RESULTS: It was found that neonatal lesion resulted in significantly decreased time spent in social interaction and an enhanced level of aggressive behaviour. This shift is not due to anxiety because we could not find differences between control rats and lesioned rats in the elevated plus-maze. Lesion in the ventral and dorsal hippocampus, respectively, in 8-week-old rats did not affect social behaviour. CONCLUSIONS: The results of our study indicate that ibotenic acid-induced hippocampal damage per se is not related to the shift in social behaviour. We favour the hypothesis that these changes are due to lesion-induced impairments in neurodevelopmental processes at an early stage of ontogenesis. Figure 2: An example of a self-annotated abstract. Source: http://www.ncbi.nlm.nih.gov/pubmed/10435405. Rhetorical functions that topics take part in could hold important clues about the stage or development of an intellectual movement they stand to represent. For example, a topic that shifts over time from being employed as a method to being mentioned as background may signal an increase in its maturity and perhaps a corresponding decrease in its popularity among new research. In this paper, we introduce a new algorithm to determine the rhetorical functions of topics associated with an abstract. There is much work on annotating and automatically parsing the rhetorical functions or narrative structure of scientific writing (e.g., Teufel, 2000; Chung, 2009; Gupta and Manning, 2011; de Waard and Maat, 2012). We derive insights from this prior work, but since we desire to apply our analysis to a broad range of domains, we build our narrative structure model based on over 83,000 self-labeled abstracts extracted from a variety of domains in the Web of Science corpus. Figure 2 shows an example of an abstract in which the authors have labeled the different narrative sections explicitly and identified the rhetorical functions. We use our narrative structure model to assign rhetorical function labels to scientific topics and show that these labels offer important clues indicating whether topics will eventually grow or decline. Contributions: The three main contributions of our paper are: 1) we introduce the notion of the rhetorical scholarly functions of scientific topics, extending previous work which tended to focus on the rhetorical functions of individual sentences. We present an algorithm to assign rhetorical function labels to a topic as used in an individual paper; 2) we derive a new narrative scheme for scientific abstracts from over 83,000 abstracts that are labeled with narrative structures by their authors themselves, and present a tagger trained on this data that can parse unseen abstracts with 87% accuracy; 3) we show that the rhetorical function distribution of a topic reflects its temporal trajectory, and that it is predictive of whether the topic will eventually rise or fall in popularity. 
2 Related Work Our work builds upon a wealth of previous literature in both topic modeling and scientific discourse analysis, which we discuss in this section. We also discuss how our work relates to prior work on analyzing scientific trends. 2.1 Topic Modeling Topic modeling has a long history of applications to scientific literature, including studies of temporal scientific trends (Griffiths and Steyvers, 2004; Steyvers et al., 2004; Wang and McCallum, 2006), article recommendation (Wang and Blei, 2011), and impact prediction (Yogatama et al., 2011). For example, Hall et al. (2008) and Anderson et al. (2012) show how tracking topic popularities over time can produce a ‘computational history’ of a particular scientific field (in their case ACL, where they tracked the rise of statistical NLP, among other dramatic changes). Technical advancements in these areas usually correspond to modifications or extensions of the topic modeling (i.e., LDA) framework itself, such as by incorporating citation (Nallapati et al., 2008) or co-authorship information (Mei et al., 2008) directly into the topic model; Nallapati et al. (2011) employ such an extension to estimate the temporal “lead” or “lag” of different scientific information outlets. We contribute to this line of work by showing how we can build off of the standard LDA 1171 framework—by overlaying rhetorical roles—and how this allows us to not only detect the growth and decline of scientific topics but also to predict these trends based upon the rhetorical roles being employed. Since our framework is structured as a pipeline (Figure 3) and works with the output of a topic modeling system, it is compatible with the vast majority of these extended topic models. 2.2 Scientific Discourse Analysis Scientific discourse analysis is an active area of research with many different proposed schema of analysis — Argument Zones (Teufel, 2000), Information Structure (Guo et al., 2010), Core Scientific Concepts (Liakata, 2010), Research Aspects (Gupta and Manning, 2011), Discourse Segments (de Waard and Maat, 2012), Relation Structures (Tateisi et al., 2014), and Rhetorical Roles (Chung, 2009) to name a few. Most studies in this area focus on improving automatic discourse parsing of scientific text, while some works also focus on the linguistic patterns and psychological effects of scientific argumentation (e.g., de Waard and Maat, 2012). A wide range of techniques have been used in prior work to parse scientific abstracts, from fully supervised techniques (Chung, 2009; Guo et al., 2010) to semi-supervised (Guo et al., 2011c; Guo et al., 2013) and unsupervised techniques (Kiela et al., 2015). Scientific discourse parsing has also been applied to other downstream tasks within the biomedical domain, such as information retrieval from randomized controlled trials in evidence based medicine (Chung, 2009; Kim et al., 2011; Verbeke et al., 2012), cancer risk assessment (Guo et al., 2011b), summarization (Teufel and Moens, 2002; Contractor et al., 2012), and question answering (Guo et al., 2013). Our work also falls in this category in the sense that our goal is to apply the rhetorical function parser to better understand the link between rhetoric and the historical trajectory of scientific ideas. 2.3 Scientific Trends Analysis There is also a large body of literature in bibliometrics and scientometrics on tracking scientific trends using various citation patterns. 
Researchers have attempted to detect emerging research fronts using topological measures of citation networks (Shibata et al., 2008) as well as co-citation clusters (Small, 2006; Shibata et al., 2009). Unlike this line of work, our focus is not on citation patterns, but on how scientific trends are reflected in the texts of scientific publications. Prior studies have also analyzed text to detect scientific trends. Mane and B¨orner (2004) and Guo et al. (2011a) use word burst detection (Kleinberg, 2003) to map new and emerging scientific fields, while Small (2011) examined sentiments expressed in the text surrounding citations, showing uncertainty in interdisciplinary citations contrasted with utility in within-discipline citations. In contrast to this previous work, we analyze the rhetorical function of automatically extracted topics from abstract text, without access to the citation context in full text. 3 Corpus We use the Thomson Reuters Web of Science Core Collection, which contains scientific abstracts from over 8,500 of leading scientific and technical journals across 150 disciplines. We limit our study to the subset of abstracts from 1991 to 2010, which forms the majority of articles. This subset (denoted WOS hereafter) contains over 25 million articles from around 250 fields. 4 Rhetorical Functions of Scientific Topics We use the term rhetorical function to identify the purpose or role a scientific topic plays within a research paper. This function qualifies the association between a topic and a paper. A topic could represent the general domain of the research or its main objective/goal. It could also correspond to the data used, the way the research is designed, or the methods used. A topic may serve one or more of these roles within the same paper. The same topic may also serve different roles in different papers. We are interested in finding the different rhetorical functions by which topics are associated with the papers in our corpus, as a tool for understanding the growth and decline of topics over time. Our focus is thus on the rhetorical functions that topics play across papers, in order to understand the ‘rhetorical structure of science’ (Latour, 1987), although these are cued by specific rhetorical structures in individual sentences of individual papers. (Our work on the function of topics thus differs somewhat from previous research focusing on the rhetorical role of individual sentences or segments in the structure of the paper.) 1172 Topic model Abstract Parser Topic A: 55% Topic B: 20% Topic C: 15% … Topic A Method Topic B Background Topic C Objective … … Abstract: A Topic Distribution of A Abstract Parse of A Rhetorical Function Assignments Rhetorical Function Labeling Downstream Analysis Figure 3: Rhetorical Function Labeling The topic model (step a) assigns topic distributions to the abstract text (bottom left) and the abstract parser (step b) divides the text into discourse segments (top right). Rhetorical function labeling (step c) combines these two analyses to assign rhetorical functions to topics (bottom middle). These labels enrich the analysis of trends in topic popularity over decades (bottom right). We follow a three-step process to assign rhetorical function labels to a large corpus of scientific abstracts. 
Figure 3 presents the pictorial representation of the procedure we follow — 1) obtain topic probabilities for each abstract, 2) parse the narrative structure of the abstracts in order to arrive at segments of text with different discourse intentions, and 3) superimpose the topic assignments and the abstract parse to arrive at the rhetorical function labels that capture how the topics are associated with the work presented in the abstract. We describe each of these steps in detail below. a. Topic modeling: The core of our approach relies on the popular latent Dirichlet allocation (LDA) (Blei et al., 2003) algorithm. It probabilistically assigns both words and abstracts to different topics, in an unsupervised manner. Let A = {a1, a2, ...a|A|} be the set of abstracts within the field we are studying, and let T = {t1, t2, ...t|T|} be the set of different topics in the field. A topic model trained on A assigns θa,t, the probability of topic t occurring in abstract a for all a ∈A and t ∈T. The topic model also provides the φt,w, the probability of word w occurring in topic t. b. Abstract parsing: An abstract parser divides the abstract text into a sequence of discourse segments, with each segment assigned a specific label denoting its rhetorical purpose. Let S(a) = (s1, s2, ...s|S(a)|) be the sequence of segments identified by the abstract parser, and let L denote the set of labels in the abstract parsing framework. The abstract parser assigns a label l(si) ∈L, for each si ∈S(a). c. Rhetorical Function Labeling: We tease apart the abstract-level topic distribution θa,t assigned by the topic model (step a) along the segments found by the abstract parser (step b), and find the topic weights on each label θl,t(a) by calculating topic weights for segments that are assigned label l, i.e., {si ∈S(a) : l(si) = l}. We calculate the topic weights for each segment by aggregating the topic weights on each word derived from φt,w inferred by the topic model: θl,t(a) ∝ X wi∈si:l(si)=l φt,w (1) We first describe the abstract parsing system (step b) we built for this purpose in Section 5, before discussing the execution details of each of the above three steps in Section 6. 5 Abstract Parsing Datasets with manual annotations for discourse structure of abstracts (e.g., Guo et al., 2010; Gupta and Manning, 2011) are few, small, and limited to specific domains. It is not clear how accurate an abstract parser trained on these datasets will perform on other domains. Since we want to obtain the structure of abstracts in a broad range of domains over different time-periods, a parser trained 1173 on small datasets in specific domains may not be adequate for our purposes. Hence, we exploit the large number of abstracts in the WOS corpus in order to gather a dataset of self-labeled abstracts from a wide range of domains, over a period of two decades. By self-labeled abstracts, we refer to the abstracts where the authors have identified the discourse segments using explicit section labels. 5.1 Extracting Self-labeled Abstracts In the first step, we extract all the patterns from the WOS corpus that could potentially be a segment label. For this, we look for a pattern that is commonly used by authors to label abstract segments — a capitalized phrase of one or more words occurring at the beginning of the abstract or preceded by a period, and followed by a “:”. We obtained 455,972 matches for the above pattern, corresponding to 2,074 unique labels, majority of which were valid discourse segment labels. 
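One way to express this pattern as a regular expression is sketched below; the paper does not give the exact expression used, so the regex and helper functions here are illustrative only.

```python
# Illustrative only: one possible regex for the described pattern (an
# all-caps phrase at the start of the abstract or after a period, followed
# by ":"). The authors' actual expression is not published.
import re

LABEL_RE = re.compile(r'(?:^|\.\s+)([A-Z][A-Z&/\- ]*[A-Z]|[A-Z]):')

def candidate_labels(abstract):
    """Return the candidate segment labels found in one abstract."""
    return [m.group(1).strip() for m in LABEL_RE.finditer(abstract)]

def split_segments(abstract):
    """Split an abstract into (label, text) segments at the matched labels.

    The period preceding each label is consumed by the next match, so it is
    not included in the previous segment's text; this is cosmetic only.
    """
    segments, matches = [], list(LABEL_RE.finditer(abstract))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(abstract)
        segments.append((m.group(1).strip(), abstract[m.end():end].strip()))
    return segments
```

The label inventory matched this way requires normalization.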
These include variations of the same labels (e.g., “OBJECTIVE” and “AIM”, “CONCLUSION” and “CONCLUSIONS” etc.) and typos (e.g., “RESLUTS”). There were also instances where two common labels were combined (e.g., “DATA AND METHODS”). The extracted matches also contained a long tail of false positives (e.g., “BMI”). One of the challenges in using the set of abstracts we obtained above is that they do not follow a common labeling scheme. Hence, we manually analyzed the top 100 unique labels (which corresponds to labels with more than ∼50 instances) and mapped them into a unified labeling scheme, grouping together labels with similar intentions. This resulted in a typology of seven labels: • BACKGROUND: The scientific context • OBJECTIVE: The specific goal(s) • DATA: The empirical dimension used • DESIGN: The experimental setup • METHOD: Means used to achieve the goal • RESULT: What was found • CONCLUSION: What was inferred We use this mapping to obtain abstracts that are self-labeled. We exclude the abstracts that had combined labels, since they may add noise to the training data. We also exclude abstracts that contained only false positive matches. This preprocessing resulted in a dataset of 83,559 abstracts. We refer to this dataset as SL, hereafter. We divide the SL dataset into Train/Dev/Test subsets for our experiments (Table 1). Train Dev Test # of abstracts 58,600 12,331 12,628 # of labeled segments 243,217 51,111 52,403 # of sentences 681,730 143,792 147,321 Table 1: Statistics of self-labeled abstracts 5.2 Automatic tagging of abstracts We use the SL dataset to build a supervised learning system that can predict the abstract structure in unseen documents. We perform the prediction at the sentence level. We used the CRF algorithm to train the model, as it has been proven successful in similar tasks in prior work (Hirohata et al., 2008; Merity et al., 2009). In our preliminary experiments, we also tried using SVM, but CRF was faster and outperformed SVM by 4-5% points consistently. We use the following features: 1. Location: location of the sentence from the beginning or end of the abstract 2. Word ngrams: unigrams and bigrams of word lemmas 3. Part-of-speech ngrams: unigrams and bigrams of part-of-speech tags 4. Verb lemmas and part-of-speeches: lemmas of verbs, and their part-of-speech tags in order to identify the tense of verb usage 5. Verb classes: we looked up each verb in the VerbNet (Kipper-Schuler, 2005) index and added the VerbNet class to the feature set if the verb maps to a unique verb class. 6. Concreteness rating: we used the max, min, and mean concreteness ratings of words based on (Brysbaert et al., 2014). Most of these features are commonly used in similar tasks, while the concreteness features are new (and significantly improved performance). We do not use parse features, however, since our pipeline emphasizes computational efficiency, and parse features showed minimal utility relative to their computational cost in prior work (Guo et al., 2010). We evaluate the performance of our learned model in terms of overall accuracy as well as perclass precision, recall and F-measure of predicting the segment labels at the sentence level. We performed experiments on the Dev set to choose the best feature configuration (e.g., tuning for word and part-of-speech ngram length). 
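To make the tagging setup concrete, the sketch below assembles per-sentence feature dictionaries of the kind listed above and trains a linear-chain CRF with the sklearn-crfsuite package. The paper does not name its CRF toolkit, and the lemma, verb, and VerbNet features are omitted here (tokens and part-of-speech tags are assumed to be precomputed), so this approximates rather than reproduces the full feature set.

```python
# Approximate sketch, not the authors' code. Each abstract is a list of
# sentences; each sentence is a list of (token, pos_tag) pairs; and
# `concreteness` maps lower-cased words to Brysbaert et al. (2014) ratings.
import sklearn_crfsuite

def sentence_features(sent, index, n_sents, concreteness):
    tokens = [t.lower() for t, _ in sent]
    tags = [p for _, p in sent]
    feats = {'pos_from_start': index, 'pos_from_end': n_sents - index - 1}
    feats.update({'w=' + w: 1.0 for w in tokens})
    feats.update({'w2=%s_%s' % b: 1.0 for b in zip(tokens, tokens[1:])})
    feats.update({'p=' + p: 1.0 for p in tags})
    feats.update({'p2=%s_%s' % b: 1.0 for b in zip(tags, tags[1:])})
    ratings = [concreteness[w] for w in tokens if w in concreteness]
    if ratings:
        feats.update({'conc_max': max(ratings), 'conc_min': min(ratings),
                      'conc_mean': sum(ratings) / len(ratings)})
    return feats

def abstract_features(sentences, concreteness):
    n = len(sentences)
    return [sentence_features(s, i, n, concreteness)
            for i, s in enumerate(sentences)]

# X_train / y_train stand for the SL training split: feature-dict sequences
# and the corresponding segment-label sequences (BACKGROUND ... CONCLUSION).
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c2=0.1, max_iterations=200)
crf.fit(X_train, y_train)
predicted = crf.predict(X_test)
```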
Each feature set described in the previous paragraph were con1174 Precision Recall F-measure BACKGROUND 74.6 77.2 75.8 OBJECTIVE 85.2 81.8 83.5 DATA 82.6 76.8 79.6 DESIGN 68.0 64.8 66.3 METHOD 80.4 80.1 80.2 RESULT 90.8 93.3 92.0 CONCLUSION 93.8 92.0 92.9 Accuracy 86.6 Table 2: Results of parsing abstract structure tributing features to the best performance obtained on the Dev set. The concreteness features we introduced significantly improved the overall accuracy by around 2%. Table 2 shows the results obtained on testing the final classifier system on the Test subset of SL.1 We obtain an overall high accuracy of 86.6% at the sentence level. While RESULT and CONCLUSION obtained F-measures above 90%, OBJECTIVE and METHOD reported reasonable Fmeasures above 80%. DESIGN obtained the lowest precision, recall and F-measure. Overall, the performance we obtain is in the range of other reported results in similar tasks (Guo et al., 2013). 6 Analysis Setup In the rest of this paper, we apply the rhetorical function labeling system described in Section 4 to analyze the growth and decline of scientific topics. We chose four diverse fields from the WOS corpus with large numbers of abstracts for our analysis, which are: Biochemistry & Molecular Biology (BIO): 850,394 abstracts, Applied Physics (PHY): 558,934 abstracts, Physical Chemistry (CHM): 533,871 abstracts, and Neurosciences (NEU): 477,197 abstracts. We apply the steps (a), (b), and (c) of rhetorical function labeling as described in Section 4 to these fields as follows: Topic Modeling: We use the LightLDA implementation (Yuan et al., 2015) of LDA. It employs a highly efficient and parallelized MetropolisHastings sampler that allows us to scale our ap1We report only the results obtained in unseen abstracts in the Test set due to lack of space. Similar performance was obtained in the Dev set as well. proach to massive datasets (e.g., millions of abstracts in our case). For all our experiments, we ran the algorithm for 1000 iterations, as this was sufficient for convergence. We use 500 topics for all four fields, but otherwise use the default hyperparameter settings from the LightLDA package.2 Abstract Parsing We applied the 7-label abstract parsing system described in Section 5 on all abstracts in each of the four disciplines. Rhetorical Function Labeling Once the steps (a) and (b) are completed, we applied the rhetorical function labeling step (c) from Section 4 in order to obtain topic weights for each segment. In addition, we calculate a rhetorical function label distribution (referred to as label distribution hereafter) for each topic associated with an abstract. 7 Dissecting Topic Trajectories In this section, we investigate whether the label distribution of the topics (i.e., across the rhetorical function labels) sheds light on the kind of trajectories they follow; in particular, whether it can predict the up-trend vs. down-trend of topics. We formalize the problem as follows: given two sets of topics clustered based on their historical trajectories, do the label-distribution based features of a topic have predictive power to classify the topic to be in either set? 7.1 Tracking topic popularities For each field, we first calculated the number of articles in which each topic occurred in at least one of the rhetorical functions for each year in the 20-year period 1991-2010. 
Since the fields themselves grow over the years, we divide this number by the number of articles within that field to obtain the popularity of each topic in any year: popularity(t, y) = DocsWithTopic(t)/|Ay|, where Ay denotes the subset of articles in year y. 7.2 Detecting growing vs. declining topics We are interested in the set of topics in each field that are either growing or declining. Figure 4 shows example popularity trends for two such topics in neuroscience: stem cell research, which sees 2In preliminary experiments, we used 100, 500 and 1000 topics. Upon manual inspection, we found that 500 topics resulted in a granularity that better captures the scientific intellectual movements within fields. 1175 Stem cell research (“cell”, “stem”, “neural”) Opioid drug research (“morphine”, “effect”, “opioid”) Figure 4: Example topic popularity curves from 1991 till 2010. Stem cell research (green) sees an increase in popularity over this time period, while Opioid research (blue) declines in popularity. a dramatic increase in popularity from 1991-2010, and opioid3 drug research, which declines in popularity during the same time-period. Of course, topics do not always follow trajectories of pure growth or decline. A topic may have risen and subsequently fallen in popularity over the period of 20 years, or may stay more or less the same throughout (Griffiths and Steyvers, 2004). Hence, categorizing topics to be growing or declining solely based on the popularity at the beginning and end of the time period is problematic. We circumvent this issue and avoid manually-defined thresholds by discovering topical growth/decline curves in an unsupervised fashion. We use the K-Spectral Centroid (K-SC) algorithm (Yang and Leskovec, 2011), which groups different time-series into clusters, such that similar shapes get assigned to the same cluster, irrespective of scaling and translation (unlike other popular time series clustering algorithms such as Dynamic Time Warping). We run the clustering algorithm using K = 3, and choose the cluster with the upward trending centroid to be the set of growing topics and the cluster with the downward trending centroid to be the set of declining topics.4 Figure 5 shows example centroids from Physical Chemistry, which clearly exhibit decreasing, increasing, and non-changing trends. 3Opioids are a popular class of analgesic (i.e., painkilling) drugs, including morphine. 4We also performed experiments using K = 5, 10 and 15 and grouped the clusters that are growing vs. declining. We obtained similar results in those experiments as well. 1991 2000 2010 0.0 0.1 0.2 0.3 0.4 0.5 popularity Cluster 1 (# = 127) 1991 2000 2010 year Cluster 2 (# = 44) 1991 2000 2010 Cluster 3 (# = 329) Figure 5: Cluster centroids in Physical Chemistry Cluster 1: topics that declined in popularity; Cluster 2: topics that grew in popularity; Cluster 3: topics that stayed mostly the same. 7.3 Characterizing topic growth vs. decline We not only seek to detect growing vs. declining topics; our goal is to characterize these trajectories based upon the rhetorical functions that the topics are fulfilling during different points of their life-cycles. Figure 6 shows how the rhetorical function label distributions of the opioid and stem cell research topics shift from 1991 to 2010. The opioid research topic, which declines in popularity during this time-period, is frequently discussed in the RESULT and BACKGROUND roles during the early years. 
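The clustering step of Section 7.2 can be written compactly as below. The sketch implements the scale-invariant K-SC distance and centroid update but omits the time-shift search of the full algorithm, which the aligned yearly popularity series do not require; it is our illustration, not the authors' implementation.

```python
# Simplified K-SC (Yang and Leskovec, 2011) without the time-shift search.
# X has shape (n_topics, n_years), one popularity curve per row.
import numpy as np

def ksc_distance(x, centroid):
    # d(x, m) = min_a ||x - a*m|| / ||x||, which reduces to sqrt(1 - cos^2).
    cos = x @ centroid / (np.linalg.norm(x) * np.linalg.norm(centroid) + 1e-12)
    return np.sqrt(max(0.0, 1.0 - cos ** 2))

def ksc_centroid(cluster):
    # Minimiser of the summed squared distances: the eigenvector of
    # M = sum_i (I - x_i x_i^T / ||x_i||^2) with the smallest eigenvalue.
    d = cluster.shape[1]
    M = np.zeros((d, d))
    for x in cluster:
        M += np.eye(d) - np.outer(x, x) / (x @ x + 1e-12)
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    mu = vecs[:, 0]
    return mu if mu.sum() >= 0 else -mu     # fix the arbitrary sign

def ksc(X, k=3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    assign = rng.integers(k, size=len(X))
    for _ in range(iters):
        centroids = [ksc_centroid(X[assign == j]) for j in range(k)]
        new = np.array([np.argmin([ksc_distance(x, c) for c in centroids])
                        for x in X])
        if np.array_equal(new, assign):
            break
        assign = new
    return assign, centroids
```

The clusters whose centroids trend upward and downward supply the growing and declining topic sets analysed here. As noted above, the declining opioid topic is heavy in the RESULT and BACKGROUND roles early in the period.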
In contrast, the stem cell research topic, which is dramatically increasing in popularity, only begins to be discussed frequently in these roles towards the end of the time-period. Intuitively, these shifts make sense: topics become results-oriented and mentioned as background as they reach their peak; this peak is seen at the beginning of the time-period for opioid research, while stem cell research appears to be increasing towards a peak by the end of our data (i.e., 2010). These observations indicate that the rhetorical functions which a topic is fulfilling may be indicative of its current growth vs. decline. 7.4 Experiments We perform two sets of experiments to quantify the qualitative insights noted above, i.e. that a topic’s rhetorical function distribution is indicative of its eventual growth vs. decline. First, we show that a topic’s rhetorical function distribution over the entire time-period can be used to classify the topic as either growing or declining. Next, we show that using only the first five years of data, we can predict whether a topic will eventually grow or decline. We use two sets of features: • label-distribution-percents (LD-%): seven features corresponding to the percentage of topics across the seven rhetorical function labels (e.g., % of time the topic is a METHOD) 1176 1991 1995 2000 2005 2010 years 0.000 0.001 0.002 0.004 0.005 0.006 popularity Trajectory Decomposition BACKGROUND OBJECTIVE DATA DESIGN METHOD RESULT CONCL. (a) Stem cell topic. 1991 1995 2000 2005 2010 years 0.000 0.000 0.001 0.001 0.002 0.002 popularity Trajectory Decomposition (b) Opioid topic. Figure 6: Examples of topics with changing rhetorical functions over time. • label-distribution-changes (LD-∆): seven features that aggregate the mean change in this percentage over the years (e.g., is the % of being METHOD going up/down?). These features are fed into a standard L2regularized logistic regression classifier (with regularization parameter C = 10), which predicts whether a topic belongs to the growing or declining cluster.5 We use random prediction and majority prediction as two uninformed baselines. Classification task System ALL BIO PHY CHM NEU Random 50.3 47.2 47.8 50.9 51.2 Majority 56.1 56.3 81.6 74.3 56.6 LR 74.2 81.0 83.3 81.9 74.8 LR - LD-R 71.3 77.7 81.6 73.1 70.5 Table 3: Results on classifying trajectories LR: Logistic Regression using LD-%, LD-∆, and LD-R. LR - LD-R: Logistic Regression without using LD-R. For the task of classifying a topic as either growing or declining, we compute the LD-% and LD∆features separately over the first and last five years of the data. This allows the model to analyze how the rhetorical functions are differentially expressed at the start vs. the end of the period under consideration. We also add a feature labeldistribution-ratio (LD-R), which is the ratio of the end to beginning LD-% values, so that the model has access to relative increases/decreases in the label distributions over the entire time-period. 5Similar performance was obtained with a linear SVM Table 3 shows the performance of our model on this task. As expected a topic’s label distribution over its entire life-time is very informative with respect to classifying the topic as growing or declining. We achieve a significant improvement over the baselines on the full dataset (32.3% relative improvement over majority prediction), and this trend holds across each field separately. The ratio feature proved to be extremely predictive in this task, i.e. 
relative increases/decreases in a topic being used in different functional roles are very predictive of the type of its trajectory. Prediction task We now turn to the more difficult task of predicting whether a topic will grow or decline, given access only to the first five years of its label distribution. The setup here is identical to the classification task, except that we now have access only to the LD-∆and LD-% features aggregated over the first five years of data. System Accuracy on ALL LD-% + LD-∆ 72.1 LD-% only 71.0 LD-∆only 60.4 Table 4: Results on predicting trajectory Table 4 shows the performance of our model on this task. (The baseline performances are the same as in the classification task). These results show that we can accurately predict whether a topic will grow or decline using only a small amount of data. 1177 Label BKGRND. OBJ. DATA DESIGN METHOD RESULT CONC. LD-% Weight -1.21 -0.10 6.38 1.21 3.82 -8.65 1.67 LD-∆Weight 2.05 -0.01 2.20 -1.08 -1.63 -0.26 -1.27 Table 5: Logistic Regression feature weights for the prediction task on the full (ALL) dataset. Moreover, we see that both percentage and delta features are necessary for this task. 7.5 Analysis The feature weights of our learned linear model also provide insights into the dynamics of scientific trends. Table 5 contains the learned feature weights for the prediction task. Overall, these weights reinforce the conclusions drawn from the case study in Section 7.3. The strongest feature is the LD-% feature for the RESULT rhetorical function, which is highly negative. This indicates that topics that are currently being discussed as a result are likely at the peak of their popularity, and are thus likely to decrease in the future. Interestingly, the weights on the LD-% features for the methodological rhetorical functions (METHODS, DATA, and DESIGN) are all significantly positive. This suggests that topics occupying these functions may have reached a level of maturity where they are active areas of research and are being consumed by a large number of researchers, but that they have not yet peaked in their popularity. Finally, we see that the weights for the BACKGROUND and CONCLUSION roles have opposite trends: growing topics are more often mentioned as conclusions whereas dying topics, i.e. topics at the peak of their life-cycles, tend to be mentioned in background, or contextualizing, statements. 8 Conclusion We introduce a novel framework for assigning rhetorical functions to associations between scientific topics and papers, and we show how these rhetorical functions are predictive of a topic’s growth vs. decline. Our analysis reveals important regularities with respect to how a topic’s usage evolves over its lifecycle. We find that topics that are currently discussed as results tend to be in decline, whereas topics that are playing a methodological role tend to be in the early phases of growth. In some ways these results are counter-intuitive; for example, one might expect topics that are being discussed as results to be the focus of current cutting edge research, and that methodological topics might be more mundane and lacking in novelty. Instead our results suggest that results-oriented topics are at the peak of their life-cycle, while methodological topics still have room to grow. 
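For reference, the feature construction and classifier of Section 7.4 can be sketched as follows. Only the L2-regularized logistic regression with C = 10 follows the stated setup; the variable names and the exact aggregation are ours, so treat this as an illustration rather than the released code.

```python
# Illustrative sketch, not the authors' code. `dist` has shape
# (n_years, n_labels): for one topic, its share of occurrences in each of
# the seven rhetorical functions per year.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ['BACKGROUND', 'OBJECTIVE', 'DATA', 'DESIGN',
          'METHOD', 'RESULT', 'CONCLUSION']

def ld_features(dist):
    ld_percent = dist.mean(axis=0)                 # average share per label
    ld_delta = np.diff(dist, axis=0).mean(axis=0)  # mean year-to-year change
    return ld_percent, ld_delta

def prediction_features(dist):
    # Prediction task: only the first five years are visible.
    return np.concatenate(ld_features(dist[:5]))

def classification_features(dist):
    # Classification task: start and end windows plus their ratio (LD-R).
    start_pct, start_delta = ld_features(dist[:5])
    end_pct, end_delta = ld_features(dist[-5:])
    ld_ratio = end_pct / (start_pct + 1e-6)
    return np.concatenate([start_pct, start_delta,
                           end_pct, end_delta, ld_ratio])

# topics: list of (n_years, 7) arrays; y: 1 = growing cluster, 0 = declining.
def train(topics, y, feature_fn=prediction_features):
    X = np.stack([feature_fn(d) for d in topics])
    model = LogisticRegression(penalty='l2', C=10.0, max_iter=1000)
    return model.fit(X, y)
```

The learned weights in Table 5 bear this reading out: the RESULT share receives a strongly negative weight, while the DATA, METHOD, and DESIGN shares receive positive ones.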
This result has important implications for research funding and public policy: the most promising topics—in terms of potential for future growth—are not those that are currently generating the most results, but are instead those that are active areas of methodological inquiry. Our analysis does suffer from some limitations. Examining only 20 years of scientific progress prevents us from analyzing drastic scientific changes, e.g. paradigm shifts, that are only obvious over longer time-scales (Kuhn, 2012). Access to longer time-spans—along with varying data sources such as grants and patents—would also allow us to more completely model the trajectory of a topic as it moves from being active area of research to potentially impacting commercial industries and economic development. Nonetheless, we hope this work offers another step towards using computational tools to better understand the ‘rhetorical structure of science’ (Latour, 1987). Acknowledgments This work was supported by the NSF Award IIS1159679, by the Stanford Data Science Initiative, and the Brown Institute for Media Innovation. W.H. was supported by the SAP Stanford Graduate Fellowship. We would also like to thank the ACL anonymous reviewers for their constructive feedback. References Ashton Anderson, Dan McFarland, and Dan Jurafsky. 2012. Towards a computational history of the ACL: 1980-2008. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 13–21. Association for Computational Linguistics. 1178 David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd international conference on Machine learning, pages 113– 120. ACM. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. The Journal of machine Learning research, 3:993–1022. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904–911. Grace Y. Chung. 2009. Sentence retrieval for abstracts of randomized controlled trials. BMC Medical Informatics and Decision Making, 9(1):1–13. Danish Contractor, Yufan Guo, and Anna Korhonen. 2012. Using argumentative zones for extractive summarization of scientific articles. In COLING, volume 12, pages 663–678. Citeseer. Graham Crookes. 1986. Towards a validated analysis of scientific text structure. Applied linguistics, 7(1):57–70. Anita de Waard and Henk Pander Maat. 2012. Verb form indicates discourse segment type in biological research papers: Experimental evidence. Journal of English for Academic Purposes, 11(4):357 – 366. Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235. Yufan Guo, Anna Korhonen, Maria Liakata, Ilona Silins Karolinska, Lin Sun, and Ulla Stenius. 2010. Identifying the information structure of scientific abstracts: an investigation of three different schemes. In Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, pages 99–107. Association for Computational Linguistics. Hanning Guo, Scott Weingart, and Katy B¨orner. 2011a. Mixed-indicators model for identifying emerging research areas. Scientometrics, 89(1):421–435. Yufan Guo, Anna Korhonen, Maria Liakata, Ilona Silins, Johan Hogberg, and Ulla Stenius. 2011b. A comparison and user-based evaluation of models of textual information structure in the context of cancer risk assessment. BMC bioinformatics, 12(1):1. 
Yufan Guo, Anna Korhonen, and Thierry Poibeau. 2011c. A weakly-supervised approach to argumentative zoning of scientific documents. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 273– 283, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Yufan Guo, Ilona Silins, Ulla Stenius, and Anna Korhonen. 2013. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review. Bioinformatics, 29(11):1440–1447. Sonal Gupta and Christopher Manning. 2011. Analyzing the dynamics of research by extracting key aspects of scientific papers. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1–9, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. David Hall, Daniel Jurafsky, and Christopher D. Manning. 2008. Studying the history of ideas using topic models. In Proceedings of the conference on empirical methods in natural language processing, pages 363–371. Association for Computational Linguistics. Kenji Hirohata, Naoaki Okazaki, Sophia Ananiadou, and Mitsuru Ishizuka. 2008. Identifying sections in scientific abstracts using conditional random fields. In In Proc. of the IJCNLP 2008. Douwe Kiela, Yufan Guo, Ulla Stenius, and Anna Korhonen. 2015. Unsupervised discovery of information structure in biomedical documents. Bioinformatics, 31(7):1084–1092. S Nam Kim, David Martinez, Lawrence Cavedon, and Lars Yencken. 2011. Automatic classification of sentences to support evidence based medicine. BMC bioinformatics, 12(2):1. Karin Kipper-Schuler. 2005. VerbNet: a broadcoverage, comprehensive verb lexicon. Ph.D. thesis, Computer and Information Science Department, Universiy of Pennsylvania, Philadelphia, PA. Jon Kleinberg. 2003. Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery, 7(4):373–397. Thomas S Kuhn. 2012. The structure of scientific revolutions. University of Chicago press. Bruno Latour. 1987. Science in action: How to follow scientists and engineers through society. Harvard university press. Maria Liakata. 2010. Zones of conceptualisation in scientific papers: a window to negative and speculative statements. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 1–4, Uppsala, Sweden, July. University of Antwerp. Ketan K Mane and Katy B¨orner. 2004. Mapping topics and topic bursts in pnas. Proceedings of the National Academy of Sciences, 101(suppl 1):5287– 5290. Qiaozhu Mei, Deng Cai, Duo Zhang, and ChengXiang Zhai. 2008. Topic modeling with network regularization. In Proceedings of the 17th International Conference on World Wide Web, WWW ’08, pages 101–110, New York, NY, USA. ACM. 1179 Stephen Merity, Tara Murphy, and James R Curran. 2009. Accurate argumentative zoning with maximum entropy models. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries, pages 19–26. Association for Computational Linguistics. Ramesh M. Nallapati, Amr Ahmed, Eric P. Xing, and William W. Cohen. 2008. Joint latent topic models for text and citations. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, pages 542–550, New York, NY, USA. ACM. Ramesh Maruthi Nallapati, Xiaolin Shi, Daniel McFarland, Jure Leskovec, and Daniel Jurafsky. 2011. Leadlag lda: Estimating topic specific leads and lags of information outlets. 
In Fifth International AAAI Conference on Weblogs and Social Media. Naoki Shibata, Yuya Kajikawa, Yoshiyuki Takeda, and Katsumori Matsushima. 2008. Detecting emerging research fronts based on topological measures in citation networks of scientific publications. Technovation, 28(11):758–775. Naoki Shibata, Yuya Kajikawa, Yoshiyuki Takeda, and Katsumori Matsushima. 2009. Comparative study on methods of detecting research fronts using different types of citation. Journal of the American Society for Information Science and Technology, 60(3):571–580. Henry Small. 2006. Tracking and predicting growth areas in science. Scientometrics, 68(3):595–610. Henry Small. 2011. Interpreting maps of science using citation context sentiments: A preliminary investigation. Scientometrics, 87(2):373–388, May. Mark Steyvers, Padhraic Smyth, Michal Rosen-Zvi, and Thomas Griffiths. 2004. Probabilistic authortopic models for information discovery. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’04, pages 306–315, New York, NY, USA. ACM. Yuka Tateisi, Yo Shidahara, Yusuke Miyao, and Akiko Aizawa. 2014. Annotation of computer science papers for semantic relation extrac-tion. In LREC, pages 1423–1429. Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Computational linguistics, 28(4):409–445. Simone Teufel. 2000. Argumentative zoning: information extraction from scientific text. Ph.D. thesis, University of Edinburgh. Mathias Verbeke, Vincent Van Asch, Roser Morante, Paolo Frasconi, Walter Daelemans, and Luc De Raedt. 2012. A statistical relational learning approach to identifying evidence based medicine categories. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 579–589. Association for Computational Linguistics. Chong Wang and David M. Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 448–456, New York, NY, USA. ACM. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-markov continuous-time model of topical trends. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 424–433. ACM. Jaewon Yang and Jure Leskovec. 2011. Patterns of temporal variation in online media. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 177–186. ACM. Dani Yogatama, Michael Heilman, Brendan O’Connor, Chris Dyer, Bryan R Routledge, and Noah A Smith. 2011. Predicting a scientific community’s response to an article. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 594–604. Association for Computational Linguistics. Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric Po Xing, Tie-Yan Liu, and Wei-Ying Ma. 2015. LightLDA: Big Topic Models on Modest Computer Clusters. In Proceedings of the 24th International Conference on World Wide Web. 1180
2016
111
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1181–1191, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Compositional Sequence Labeling Models for Error Detection in Learner Writing Marek Rei The ALTA Institute Computer Laboratory University of Cambridge United Kingdom [email protected] Helen Yannakoudakis The ALTA Institute Computer Laboratory University of Cambridge United Kingdom [email protected] Abstract In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show the model is able to outperform other participants on detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators. 1 Introduction Automated systems for detecting errors in learner writing are valuable tools for second language learning and assessment. Most work in recent years has focussed on error correction, with error detection performance measured as a byproduct of the correction output (Ng et al., 2013; Ng et al., 2014). However, this assumes that systems are able to propose a correction for every detected error, and accurate systems for correction might not be optimal for detection. While closed-class errors such as incorrect prepositions and determiners can be modeled with a supervised classification approach, content-content word errors are the 3rd most frequent error type and pose a serious challenge to error correction frameworks (Leacock et al., 2014; Kochmar and Briscoe, 2014). Evaluation of error correction is also highly subjective and human annotators have rather low agreement on gold-standard corrections (Bryant and Ng, 2015). Therefore, we treat error detection in learner writing as an independent task and propose a system for labeling each token as being correct or incorrect in context. Common approaches to similar sequence labeling tasks involve learning weights or probabilities for context n-grams of varying sizes, or relying on previously extracted high-confidence context patterns. Both of these methods can suffer from data sparsity, as they treat words as independent units and miss out on potentially related patterns. In addition, they need to specify a fixed context size and are therefore often limited to using a small window near the target. Neural network models aim to address these weaknesses and have achieved success in various NLP tasks such as language modeling (Bengio et al., 2003) and speech recognition (Dahl et al., 2012). Recent developments in machine translation have also shown that text of varying length can be represented as a fixed-size vector using convolutional networks (Kalchbrenner and Blunsom, 2013; Cho et al., 2014a) or recurrent neural networks (Cho et al., 2014b; Bahdanau et al., 2015). In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional structures for constructing informative context representations. 
Based on the findings, we propose a novel framework for performing error detection in learner writing, which achieves state-of-the-art results on two datasets of errorannotated learner essays. The sequence labeling model creates a single variable-size network over the whole sentence, conditions each label on all the words, and predicts all labels together. The effects of different datasets on the overall performance are investigated by incrementally providing additional training data to the model. Finally, we integrate the error detection framework with a publicly deployed self-assessment system, leading 1181 to performance comparable to human annotators. 2 Background and Related Work The field of automatically detecting errors in learner text has a long and rich history. Most work has focussed on tackling specific types of errors, such as usage of incorrect prepositions (Tetreault and Chodorow, 2008; Chodorow et al., 2007), articles (Han et al., 2004; Han et al., 2006), verb forms (Lee and Seneff, 2008), and adjective-noun pairs (Kochmar and Briscoe, 2014). However, there has been limited work on more general error detection systems that could handle all types of errors in learner text. Chodorow and Leacock (2000) proposed a method based on mutual information and the chi-square statistic to detect sequences of part-of-speech tags and function words that are likely to be ungrammatical in English. Gamon (2011) used Maximum Entropy Markov Models with a range of features, such as POS tags, string features, and outputs from a constituency parser. The pilot Helping Our Own shared task (Dale and Kilgarriff, 2011) also evaluated grammatical error detection of a number of different error types, though most systems were error-type specific and the best approach was heavily skewed towards article and preposition errors (Rozovskaya et al., 2011). We extend this line of research, working towards general error detection systems, and investigate the use of neural compositional models on this task. The related area of grammatical error correction has also gained considerable momentum in the past years, with four recent shared tasks highlighting several emerging directions (Dale and Kilgarriff, 2011; Dale et al., 2012; Ng et al., 2013; Ng et al., 2014). The current state-of-the-art approaches can broadly be separated into two categories: 1. Phrase-based statistical machine translation techniques, essentially translating the incorrect source text into the corrected version (Felice et al., 2014; Junczys-Dowmunt and Grundkiewicz, 2014) 2. Averaged Perceptrons and Naive Bayes classifiers making use of native-language error correction priors (Rozovskaya et al., 2014; Rozovskaya et al., 2013). Error correction systems require very specialised models, as they need to generate an improved version of the input text, whereas a wider range of tagging and classification models can be deployed on error detection. In addition, automated writing feedback systems that indicate the presence and location of errors may be better from a pedagogic point of view, rather than providing a panacea and correcting all errors in learner text. In Section 7 we evaluate a neural sequence tagging model on the latest shared task test data, and compare it to the top participating systems on the task of error detection. 3 Sequence Labeling Architectures We construct a neural network sequence labeling framework for the task of error detection in learner writing. 
The model receives only a series of tokens as input, and outputs the probability of each token in the sentence being correct or incorrect in a given context. The architectures start with the vector representations of individual words, [x1, ..., xT ], where T is the length of the sentence. Different composition functions are then used to calculate a hidden vector representation of each token in context, [h1, ..., hT ]. These representations are passed through a softmax layer, producing a probability distribution over the possible labels for every token in context: pt = softmax(Woht) (1) where Wo is the weight matrix between the hidden vector ht and the output layer. We investigate six alternative neural network architectures for the task of error detection: convolutional, bidirectional recurrent, bidirectional LSTM, and multi-layer variants of each of them. In the convolutional neural network (CNN, Figure 1a) for token labeling, the hidden vector ht is calculated based on a fixed-size context window. The convolution acts as a feedforward network, using surrounding context words as input, and therefore it will learn to detect the presence of different types of n-grams. The assumption behind the convolutional architecture is that memorising erroneous token sequences from the training data is sufficient for performing error detection. The convolution uses dw tokens on either side of the target token, and the vectors for these tokens are concatenated, preserving the ordering: ct = xt−dw : ... : xt+dw (2) where x1 : x2 is used as notation for vector concatenation of x1 and x2. The combined vector is 1182 Figure 1: Alternative neural composition architectures for error detection. a) Convolutional network b) Deep convolutional network c) Recurrent bidirectional network d) Deep recurrent bidirectional network. The bottom layers are embeddings for individual tokens. The middle layers are context-dependent representations, built using different composition functions. The top layers are softmax output layers, predicting a label distribution for every input token. then passed through a non-linear layer to produce the hidden representation: ht = tanh(Wcct) (3) The deep convolutional network (Figure 1b) adds an extra convolutional layer to the architecture, using the first layer as input. It creates convolutions of convolutions, thereby capturing more complex higher-order features from the dataset. In a recurrent neural network (RNN), each hidden representation is calculated based on the current token embedding and the hidden vector at the previous time step: ht = f(Wxt + V ht−1) (4) where f(z) is a nonlinear function, such as the sigmoid function. Instead of a fixed context window, information is passed through the sentence using a recursive function and the network is able to learn which patterns to disregard or pass forward. This recurrent network structure is referred to as an Elman-type network, after Elman (1990). The bidirectional RNN (Figure 1c) consists of two recurrent components, moving in opposite directions through the sentence. While the unidirectional version takes into account only context on the left of the target token, the bidirectional version recursively builds separate context representations from either side of the target token. 
The left and right context are then concatenated and used as the hidden representation: h→ t = f(Wrxt + Vrh→ t−1) (5) h← t = f(Wlxt + Vlh← t+1) (6) ht = h→ t : h← t (7) Recurrent networks have been shown to perform well on the task of language modeling (Mikolov et al., 2011; Chelba et al., 2013), where they learn an incremental composition function for predicting the next token in the sequence. However, while language models can estimate the probability of each token, they are unable to differentiate between infrequent and incorrect token sequences. For error detection, the composition function needs to learn to identify semantic anomalies or ungrammatical combinations, independent of their frequency. The bidirectional model provides extra information, as it allows the network to use context on both sides of the target token. Irsoy and Cardie (2014) created an extension of this architecture by connecting together multiple layers of bidirectional Elman-type recurrent network modules. This deep bidirectional RNN (Figure 1d) calculates a context-dependent representation for each token using a bidirectional RNN, and then uses this as input to another bidirectional RNN. The multi-layer structure allows the model to learn more complex higher-level features and effectively perform multiple recurrent passes through the sentence. The long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) is an advanced alternative to the Elman-type networks that has recently become increasingly popular. It uses 1183 two separate hidden vectors to pass information between different time steps, and includes gating mechanisms for modulating its own output. LSTMs have been successfully applied to various tasks, such as speech recognition (Graves et al., 2013), machine translation (Luong et al., 2015), and natural language generation (Wen et al., 2015). Two sets of gating values (referred to as the input and forget gates) are first calculated based on the previous states of the network: it = σ(Wixt + Uiht−1 + Vfct−1 + bi) (8) ft = σ(Wfxt + Ufht−1 + Vfct−1 + bf) (9) where xt is the current input, ht−1 is the previous hidden state, bi and bf are biases, ct−1 is the previous internal state (referred to as the cell), and σ is the logistic function. The new internal state is calculated based on the current input and the previous hidden state, and then interpolated with the previous internal state using ft and it as weights: ect = tanh(Wcxt + Ucht−1 + bc) (10) ct = ft ⊙ct−1 + it ⊙ect (11) where ⊙is element-wise multiplication. Finally, the hidden state is calculated by passing the internal state through a tanh nonlinearity, and weighting it with ot. The values of ot are conditioned on the new internal state (ct), as opposed to the previous one (ct−1): ot = σ(Woxt + Uoht−1 + Voct + bo) (12) ht = ot ⊙tanh(ct) (13) Because of the linear combination in equation (11), the LSTM is less susceptible to vanishing gradients over time, thereby being able to make use of longer context when making predictions. In addition, the network learns to modulate itself, effectively using the gates to predict which operation is required at each time step, thereby incorporating higher-level features. In order to use this architecture for error detection, we create a bidirectional LSTM, making use of the advanced features of LSTM and incorporating context on both sides of the target token. 
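To make the composition concrete, the following is a minimal NumPy sketch of the bidirectional LSTM labeler: one step of the gated cell in equations (8)-(13), run in both directions over a sentence, concatenated per token as in equation (7), and passed through the softmax output of equation (1). The random weights, the placeholder token embeddings, and the reading of the peephole term in equation (8) as a separate input-gate matrix are our assumptions for illustration; this is not the authors' implementation, and the extra hidden layer of the full model is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    # One step of the gated cell in equations (8)-(13); eq. (8) prints V_f
    # inside the input gate, which we read here as a separate matrix V_i.
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h_prev + p["Vi"] @ c_prev + p["bi"])   # (8)
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h_prev + p["Vf"] @ c_prev + p["bf"])   # (9)
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h_prev + p["bc"])                # (10)
    c = f * c_prev + i * c_tilde                                               # (11)
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h_prev + p["Vo"] @ c + p["bo"])        # (12)
    h = o * np.tanh(c)                                                         # (13)
    return h, c

def init_params(d_in, d_h, rng):
    # Small random weights as placeholders; a real system learns these.
    p = {}
    for g in "ifco":
        p["W" + g] = rng.normal(scale=0.1, size=(d_h, d_in))
        p["U" + g] = rng.normal(scale=0.1, size=(d_h, d_h))
        p["b" + g] = np.zeros(d_h)
    for g in "ifo":
        p["V" + g] = rng.normal(scale=0.1, size=(d_h, d_h))
    return p

def bilstm_label_probs(embeddings, fwd, bwd, W_out):
    # Left-to-right and right-to-left passes, concatenated per token (eq. 7),
    # followed by the softmax output layer of eq. (1).
    d_h = W_out.shape[1] // 2
    T = len(embeddings)
    h_f, h_b = [None] * T, [None] * T
    h, c = np.zeros(d_h), np.zeros(d_h)
    for t in range(T):                       # forward direction
        h, c = lstm_step(embeddings[t], h, c, fwd)
        h_f[t] = h
    h, c = np.zeros(d_h), np.zeros(d_h)
    for t in reversed(range(T)):             # backward direction
        h, c = lstm_step(embeddings[t], h, c, bwd)
        h_b[t] = h
    probs = []
    for t in range(T):
        z = W_out @ np.concatenate([h_f[t], h_b[t]])
        e = np.exp(z - z.max())
        probs.append(e / e.sum())            # two-way label distribution per token
    return probs

rng = np.random.default_rng(0)
d_in, d_h = 300, 200                         # sizes matching the experimental setup below
tokens = "he had trouble raising funds .".split()
embeddings = [rng.normal(size=d_in) for _ in tokens]   # stand-ins for pretrained vectors
fwd, bwd = init_params(d_in, d_h, rng), init_params(d_in, d_h, rng)
W_out = rng.normal(scale=0.1, size=(2, 2 * d_h))
for tok, pr in zip(tokens, bilstm_label_probs(embeddings, fwd, bwd, W_out)):
    print(tok, pr.round(3))
```

The sketch shows only the forward computation; in practice the parameters are trained with backpropagation through time against the token-level correct/incorrect labels.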
In addition, we experiment with a deep bidirectional LSTM, which includes two consecutive layers of bidirectional LSTMs, modeling even more complex features and performing multiple passes through the sentence. For comparison with non-neural models, we also report results using CRFs (Lafferty et al., 2001), which are a popular choice for sequence labeling tasks. We trained the CRF++ 1 implementation on the same dataset, using as features unigrams, bigrams and trigrams in a 7-word window surrouding the target word (3 words before and after). The predicted label is also conditioned on the previous label in the sequence. 4 Experiments We evaluate the alternative network structures on the publicly released First Certificate in English dataset (FCE-public, Yannakoudakis et al. (2011)). The dataset contains short texts, written by learners of English as an additional language in response to exam prompts eliciting freetext answers and assessing mastery of the upperintermediate proficiency level. The texts have been manually error-annotated using a taxonomy of 77 error types. We use the released test set for evaluation, containing 2,720 sentences, leaving 30,953 sentences for training. We further separate 2,222 sentences from the training set for development and hyper-parameter tuning. The dataset contains manually annotated error spans of various types of errors, together with their suggested corrections. We convert this to a tokenlevel error detection task by labeling each token inside the error span as being incorrect. In order to capture errors involving missing words, the error label is assigned to the token immediately after the incorrect gap – this is motivated by the intuition that while this token is correct when considered in isolation, it is incorrect in the current context, as another token should have preceeded it. As the main evaluation measure for error detection we use F0.5, which was also the measure adopted in the CoNLL-14 shared task on error correction (Ng et al., 2014). It combines both precision and recall, while assigning twice as much weight to precision, since accurate feedback is often more important than coverage in error detection applications (Nagata and Nakatani, 2010). Following Chodorow et al. (2012), we also report raw counts for predicted and correct tokens. Related evaluation measures, such as the M2-scorer (Ng et al., 2014) and the I-measure (Felice and 1https://taku910.github.io/crfpp/ 1184 Development Test P R F0.5 predicted correct P R F0.5 CRF 62.2 13.6 36.3 914 516 56.5 8.2 25.9 CNN 52.4 24.9 42.9 3518 1620 46.0 25.7 39.8 Deep CNN 48.4 26.2 41.4 3992 1651 41.4 26.2 37.1 Bi-RNN 63.9 18.0 42.3 2333 1196 51.3 19.0 38.2 Deep Bi-RNN 60.3 17.6 40.6 2543 1255 49.4 19.9 38.1 Bi-LSTM 54.5 28.2 46.0 3898 1798 46.1 28.5 41.1 Deep Bi-LSTM 56.7 21.3 42.5 2822 1359 48.2 21.6 38.6 Table 1: Performance of the CRF and alternative neural network structures on the public FCE dataset for token-level error detection in learner writing. Briscoe, 2015), require the system to propose a correction and are therefore not directly applicable on the task of error detection. During the experiments, the input text was lowercased and all tokens that occurred less than twice in the training data were represented as a single unk token. Word embeddings were set to size 300 and initialised using the publicly released pretrained Word2Vec vectors (Mikolov et al., 2013). 
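As a side note to the evaluation setup just described, the span-to-token label conversion (including the convention for missing-word errors) and the F0.5 measure can be sketched as follows. The representation of error spans as (start, end) token offsets, with zero-length spans marking missing words, is an assumption made for illustration rather than the corpus's actual annotation format.

```python
def spans_to_token_labels(n_tokens, error_spans):
    # 0 = correct token, 1 = incorrect token.
    labels = [0] * n_tokens
    for start, end in error_spans:
        if start == end:                     # missing word: label the token
            if start < n_tokens:             # immediately after the gap
                labels[start] = 1
        else:                                # label every token inside the span
            for i in range(start, min(end, n_tokens)):
                labels[i] = 1
    return labels

def f_beta(predicted, gold, beta=0.5):
    # F0.5 over token-level error labels: precision is weighted twice as
    # much as recall, as in the CoNLL-14 shared task evaluation.
    tp = sum(p == g == 1 for p, g in zip(predicted, gold))
    n_pred, n_gold = sum(predicted), sum(gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

gold = spans_to_token_labels(6, [(1, 2), (4, 4)])   # one span error, one missing word
pred = [0, 1, 0, 0, 0, 0]
print(gold, pred, round(f_beta(pred, gold), 3))
```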
The convolutional networks use window size 3 on either side of the target token and produce a 300-dimensional context-dependent vector. The recurrent networks use hidden layers of size 200 in either direction. We also added an extra hidden layer of size 50 between each of the composition functions and the output layer – this allows the network to learn a separate non-linear transformation and reduces the dimensionality of the compositional vectors. The parameters were optimised using gradient descent with initial learning rate 0.001, the ADAM algorithm (Kingma and Ba, 2015) for dynamically adapting the learning rate, and batch size of 64 sentences. F0.5 on the development set was evaluated at each epoch, and the best model was used for final evaluations. 5 Results Table 1 contains results for experiments comparing different composition architectures on the task of error detection. The CRF has the lowest F0.5 score compared to any of the neural models. It memorises frequent error sequences with high precision, but does not generalise sufficiently, resulting in low recall. The ability to condition on the previous label also does not provide much help on this task – there are only two possible labels and the errors are relatively sparse. The architecture using convolutional networks performs well and achieves the second-highest result on the test set. It is designed to detect error patterns from a fixed window of 7 words, which is large enough to not require the use of more advanced composition functions. In contrast, the performance of the bidirectional recurrent network (Bi-RNN) is somewhat lower, especially on the test set. In Elman-type recurrent networks, the context signal from distant words decreases fairly rapidly due to the sigmoid activation function and diminishing gradients. This is likely why the BiRNN achieves the highest precision of all systems – the predicted label is mostly influenced by the target token and its immediate neighbours, allowing the network to only detect short highconfidence error patterns. The convolutional network, which uses 7 context words with equal attention, is able to outperform the Bi-RNN despite the fixed-size context window. The best overall result and highest F0.5 is achieved by the bidirectional LSTM composition model (Bi-LSTM). This architecture makes use of the full sentence for building context vectors on both sides of the target token, but improves on BiRNN by utilising a more advanced composition function. Through the application of a linear update for the internal cell representation, the LSTM is able to capture dependencies over longer distances. In addition, the gating functions allow it to adaptively decide which information to include in the hidden representations or output for error detection. We found that using multiple layers of compositional functions in a deeper network gave comparable or slightly lower results for all the composition architectures. This is in contrast to Ir1185 Training data Dev F0.5 Test F0.5 FCE-public 46.0 41.1 +NUCLE 39.0 41.0 +IELTS 45.6 50.7 +FCE 57.2 61.1 +CPE 59.0 62.1 +CAE 60.7 64.3 Table 2: Results on the public FCE test set when incrementally providing more training data to the error detection model. soy and Cardie (2014), who experimented with Elman-type networks and found some improvements using multiple layers of Bi-RNNs. 
The differences can be explained by their task benefiting from alternative features: the evaluation was performed on opinion mining where most target sequences are longer phrases that need to be identified based on their semantics, whereas many errors in learner writing are short and can only be identified by a contextual mismatch. In addition, our networks contain an extra hidden layer before the output, which allows the models to learn higherlevel representations without adding complexity through an extra compositional layer. 6 Additional Training Data There are essentially infinitely many ways of committing errors in text and introducing additional training data should alleviate some of the problems with data sparsity. We experimented with incrementally adding different error-tagged corpora into the training set and measured the resulting performance. This allows us to provide some context to the results obtained by using each of the datasets, and gives us an estimate of how much annotated data is required for optimal performance on error detection. The datasets we consider are as follows: • FCE-public – the publicly released subset of FCE (Yannakoudakis et al., 2011), as described in Section 4. • NUCLE – the NUS Corpus of Learner English (Dahlmeier et al., 2013), used as the main training set for CoNLL shared tasks on error correction. • IELTS – a subset of the IELTS examination dataset extracted from the Cambridge Learner Corpus (CLC, Nicholls (2003)), containing 68,505 sentences from all proficiency levels, also used by Felice et al. (2014). • FCE – a larger selection of FCE texts from the CLC, containing 323,192 sentences. • CPE – essays from the proficient examination level in the CLC, containing 210,678 sentences. • CAE – essays from the advanced examination level in the CLC, containing 219,953 sentences. Table 2 contains results obtained by incrementally adding training data to the Bi-LSTM model. We found that incorporating the NUCLE dataset does not improve performance over using only the FCE-public dataset, which is likely due to the two corpora containing texts with different domains and writing styles. The texts in FCE are written by young intermediate students, in response to prompts eliciting letters, emails and reviews, whereas NUCLE contains mostly argumentative essays written by advanced adult learners. The differences in the datasets offset the benefits from additional training data, and the performance remains roughly the same. Figure 2: F0.5 measure on the public FCE test set, as a function of the total number of tokens in the training set. In contrast, substantial improvements are obtained when introducing the IELTS and FCE datasets, with each of them increasing the F0.5 score by roughly 10%. The IELTS dataset contains essays from all proficiency levels, and FCE from mid-level English learners, which provides the model with a distribution of ‘average’ errors to learn from. 
Adding even more training data from 1186 Annotation 1 Annotation 2 predicted correct P R F0.5 correct P R F0.5 Annotator 1 2992 1800 60.2 42.9 55.7 Annotator 2 4199 1800 42.9 60.2 45.5 CAMB 2170 731 33.7 24.4 31.3 1052 48.5 25.1 40.8 CUUI 1582 550 34.8 18.4 29.5 755 47.7 18.0 35.9 AMU 1260 479 38.0 16.0 29.8 643 51.0 15.3 34.8 P1+P2+S1+S2 887 388 43.7 13.0 29.7 535 60.3 12.7 34.5 Bi-LSTM (FCE-public) 4449 683 15.4 22.8 16.4 1052 23.6 25.1 23.9 Bi-LSTM (full) 1540 627 40.7 21.0 34.3 911 59.2 21.7 44.0 Table 3: Error detection results on the two official annotations for the CoNLL-14 shared task test dataset. high-proficiency essays in CPE and CAE only provides minor further improvements. Figure 2 also shows F0.5 on the FCE-public test set as a function of the total number of tokens in the training data. The optimal trade-off between performance and data size is obtained at around 8 million tokens, after introducing the FCE dataset. 7 CoNLL-14 Shared Task The CoNLL-14 shared task (Ng et al., 2014) focussed on automatically correcting errors in learner writing. The NUCLE dataset was provided as the main training dataset, but participants were allowed to include other annotated corpora and external resources. For evaluation, 25 students were recruited to each write two new essays, which were then annotated by two experts. We used the same methods from Section 4 for converting the shared task annotation to a tokenlevel labeling task in order to evaluate the models on error detection. In addition, the correction outputs of all the participating systems were made available online, therefore we are able to report their performance on this task. In order to convert their output to error detection labels, the corrected sentences were aligned with the original input using Levenshtein distance, and any changes proposed by the system resulted in the corresponding source words being labeled as errors. The results on the two annotations of the shared task test data can be seen in Table 3. We first evaluated each of the human annotators with respect to the other, in order to estimate the upper bound on this task. The average F0.5 of roughly 50% shows that the task is difficult and even human experts have a rather low agreement. It has been shown before that correcting grammatical errors is highly subjective (Bryant and Ng, 2015), but these results indicate that trained annotators can disagree even on the number and location of errors. In the same table, we provide error detection results for the top 3 participants in the shared task: CAMB (Felice et al., 2014), CUUI (Rozovskaya et al., 2014), and AMU (Junczys-Dowmunt and Grundkiewicz, 2014). They each preserve their relative ranking also in the error detection evaluation. The CAMB system has a lower precision but the highest recall, also resulting in the highest F0.5. CUUI and AMU are close in performance, with AMU having slightly higher precision. After the official shared task, Susanto et al. (2014) published a system which combines several alternative models and outperforms the shared task participants when evaluated on error correction. However, on error detection it receives lower results, ranking 3rd and 4th when evaluated on F0.5 (P1+P2+S1+S2 in Table 3). The system has detected a small number of errors with high precision, and does not reach the highest F0.5. Finally, we present results for the Bi-LSTM sequence labeling system for error detection. 
Using only FCE-public for training, the overall performance is rather low as the training set is very small and contains texts from a different domain. However, these results show that the model behaves as expected – since it has not seen similar language during training, it labels a very large portion of tokens as errors. This indicates that the network is trying to learn correct language constructions from the limited data and classifies unseen structures as errors, as opposed to simply memorising error sequences from the training data. 1187 When trained on all the datasets from Section 6, the model achieves the highest F0.5 of all systems on both of the CoNLL-14 shared task test annotations, with an absolute improvement of 3% over the previous best result. It is worth noting that the full Bi-LSTM has been trained on more data than the other CoNLL contestants. However, as the shared task systems were not restricted to the NUCLE training set, all the submissions also used differing amounts of training data from various sources. In addition, the CoNLL systems are mostly combinations of many alternative models: the CAMB system is a hybrid of machine translation, a rule-based system, and a language model re-ranker; CUUI consists of different classifiers for each individual error type; and P1+P2+S1+S2 is a combination of four different error correction systems. In contrast, the Bi-LSTM is a single model for detecting all error types, and therefore represents a more scalable data-driven approach. 8 Essay Scoring In this section, we perform an extrinsic evaluation of the efficacy of the error detection system and examine the extent to which it generalises at higher levels of granularity on the task of automated essay scoring. More specifically, we replicate experiments using the text-level model described by Andersen et al. (2013), which is currently deployed in a self-assessment and tutoring system (SAT), an online automated writing feedback tool actively used by language learners.2 The SAT system predicts an overall score for a given text, which provides a holistic assessment of linguistic competence and language proficiency. The authors trained a supervised ranking perceptron model on the FCE-public dataset, using features such as error-rate estimates from a language model and various lexical and grammatical properties of text (e.g., word n-grams, part-of-speech n-grams and phrase-structure rules). We replicate this experiment and add the average probability of each token in the essay being correct, according to the error detection model, as an additional feature for the scoring framework. The system was then retrained on FCE-public and evaluated on correctly predicting the assigned essay score. Table 4 presents the experimental results. The human performance on the test set is cal2http://www.cambridgeenglish.org/learning-english/freeresources/write-and-improve/ r ρ Human annotators 79.6 79.2 SAT 75.1 76.0 SAT + Bi-LSTM (FCE-public) 76.0 77.0 SAT + Bi-LSTM (full) 78.0 79.9 Table 4: Pearson’s correlation r and Spearman’s correlation ρ on the public FCE test set on the task of automated essay scoring. culated as the average inter-annotator correlation on the same data, and the existing SAT system has demonstrated levels of performance that are very close to that of human assessors. 
Nevertheless, the Bi-LSTM model trained only on FCE-public complements the existing features, and the combined model achieves an absolute improvement of around 1% percent, corresponding to 20-31% relative error reduction with respect to the human performance. Even though the Bi-LSTM is trained on the same dataset and the SAT system already includes various linguistic features for capturing errors, our error detection model manages to further improve its performance. When the Bi-LSTM is trained on all the available data from Section 6, the combination achieves further substantial improvements. The relative error reduction on Pearson’s correlation is 64%, and the system actually outperforms human annotators on Spearman’s correlation. 9 Conclusions In this paper, we presented the first experiments using neural network models for the task of error detection in learner writing. Six alternative compositional network architectures for modeling context were evaluated. Based on the findings, we propose a novel error detection framework using token-level embeddings, bidirectional LSTMs for context representation, and a multi-layer architecture for learning more complex features. This structure allows the model to classify each token as being correct or incorrect, using the full sentence as context. The self-modulation architecture of LSTMs was also shown to be beneficial, as it allows the network to learn more advanced composition rules and remember dependencies over longer distances. Substantial performance improvements were achieved by training the best model on additional 1188 datasets. We found that the largest benefit was obtained from training on 8 million tokens of text from learners with varying levels of language proficiency. In contrast, including even more data from higher-proficiency learners gave marginal further improvements. As part of future work, it would be beneficial to investigate the effect of automatically generated training data for error detection (e.g., Rozovskaya and Roth (2010)). We evaluated the performance of existing error correction systems from CoNLL-14 on the task of error detection. The experiments showed that success on error correction does not necessarily mean success on error detection, as the current best correction system (P1+P2+S1+S2) is not the same as the best shared task detection system (CAMB). In addition, the neural sequence tagging model, specialised for error detection, was able to outperform all other participating systems. Finally, we performed an extrinsic evaluation by incorporating probabilities from the error detection system as features in an essay scoring model. Even without any additional data, the combination further improved performance which is already close to the results from human annotators. In addition, when the error detection model was trained on a larger training set, the essay scorer was able to exceed human-level performance. Acknowledgments We would like to thank Prof Ted Briscoe and the reviewers for providing useful feedback. References Øistein E. Andersen, Helen Yannakoudakis, Fiona Barker, and Tim Parish. 2013. Developing and testing a self-assessment and tutoring system. Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 
2003. A Neural Probabilistic Language Model Yoshua. Journal of Machine Learning Research, 3. Christopher Bryant and Hwee Tou Ng. 2015. How Far are We from Fully Automatic High Quality Grammatical Error Correction? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Ciprian Chelba, Tom´aˇs Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. In arXiv preprint. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the Properties of Neural Machine Translation: EncoderDecoder Approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Martin Chodorow and Claudia Leacock. 2000. An unsupervised method for detecting grammatical errors. In Proceedings of the first conference on North American chapter of the Association for Computational Linguistics. Martin Chodorow, Joel R. Tetreault, and Na-Rae Han. 2007. Detection of grammatical errors involving prepositions. In Proceedings of the 4th ACLSIGSEM Workshop on Prepositions. Martin Chodorow, Markus Dickinson, Ross Israel, and Joel Tetreault. 2012. Problems in evaluating grammatical error detection systems. In COLING 2012. George. E. Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Robert Dale and Adam Kilgarriff. 2011. Helping Our Own: The HOO 2011 Pilot Shared Task. In Proceedings of the 13th European Workshop on Natural Language Generation. Robert Dale, Ilya Anisimoff, and George Narroway. 2012. HOO 2012: A report on the Preposition and Determiner Error Correction Shared Task. In The Seventh Workshop on Building Educational Applications Using NLP. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2). 1189 Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction. In The 2015 Annual Conference of the North American Chapter of the ACL. Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. Conference on Computational Natural Language Learning: Shared Task (CoNLL2014). Michael Gamon. 2011. High-Order Sequence Modeling for Language Learner Error Detection. Proceedings of the Sixth Workshop on Innovative Use of NLP for Building Educational Applications. Alex Graves, Navdeep Jaitly, and Abdel Rahman Mohamed. 2013. Hybrid speech recognition with Deep Bidirectional LSTM. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU 2013). Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2004. 
Detecting Errors in English Article Usage with a Maximum Entropy Classifier Trained on a Large, Diverse Corpus. Proceedings of the 4th International Conference on Language Resources and Evaluation. Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineering, 12. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-term Memory. Neural Computation, 9. Ozan Irsoy and Claire Cardie. 2014. Opinion Mining with Deep Recurrent Neural Networks. In EMNLP2014. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2014. The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation. Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task (CoNLL-2014). Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013). Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a Method for Stochastic Optimization. In International Conference on Learning Representations. Ekaterina Kochmar and Ted Briscoe. 2014. Detecting Learner Errors in the Choice of Content Words Using Compositional Distributional Semantics. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning. Citeseer. Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel R. Tetreault. 2014. Automated Grammatical Error Detection for Language Learners: Second Edition. John Lee and Stephanie Seneff. 2008. Correcting misuse of verb forms. In Proceedings of the 46th Annual Meeting of the ACL. Mnih-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Tom´aˇs Mikolov, Stefan Kombrink, Anoop Deoras, Luk´aˇs Burget, and Jan ˇCernock´y. 2011. RNNLM-Recurrent neural network language modeling toolkit. In ASRU 2011 Demo Session. Tom´aˇs Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013). Ryo Nagata and Kazuhide Nakatani. 2010. Evaluating performance of grammatical error detection to maximize learning effect. Coling 2010: Poster Volume. Hwee Tou Ng, Yuanbin Wu, and Christian Hadiwinoto. 2013. The CoNLL-2013 Shared Task on Grammatical Error Correction. Computational Natural Language Learning (CoNLL), Shared Task. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 Shared Task on Grammatical Error Correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. Diane Nicholls. 2003. The Cambridge Learner Corpus - error coding and analysis for lexicography and ELT. Proceedings of the Corpus Linguistics 2003 Conference. Alla Rozovskaya and Dan Roth. 2010. Training Paradigms for Correcting Errors in Grammar and Usage. 
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Alla Rozovskaya, Mark Sammons, Joshua Gioja, and Dan Roth. 2011. University of Illinois System in HOO Text Correction Shared Task. Proceedings of the 13th European Workshop on Natural Language Generation (ENLG). 1190 Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, and Dan Roth. 2013. The University of Illinois System in the CoNLL-2013 Shared Task. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task. Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, Dan Roth, and Nizar Habash. 2014. The IllinoisColumbia System in the CoNLL-2014 Shared Task. Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task (CoNLL-2014). Raymond Hendy Susanto, Peter Phandi, and Hwee Tou Ng. 2014. System Combination for Grammatical Error Correction. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP-2014). Joel R. Tetreault and Martin Chodorow. 2008. The Ups and Downs of Preposition Error Detection in ESL Writing. Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008). Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. EMNLP 2015. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A New Dataset and Method for Automatically Grading ESOL Texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. 1191
2016
112
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1192–1202, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Semantic Role Labeling with Dependency Path Embeddings Michael Roth and Mirella Lapata School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB {mroth,mlap}@inf.ed.ac.uk Abstract This paper introduces a novel model for semantic role labeling that makes use of neural sequence modeling techniques. Our approach is motivated by the observation that complex syntactic structures and related phenomena, such as nested subordinations and nominal predicates, are not handled well by existing models. Our model treats such instances as subsequences of lexicalized dependency paths and learns suitable embedding representations. We experimentally demonstrate that such embeddings can improve results over previous state-of-the-art semantic role labelers, and showcase qualitative improvements obtained by our method. 1 Introduction The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., “who” did “what” to “whom”). Semantic roles provide a layer of abstraction beyond syntactic dependency relations, such as subject and object, in that the provided labels are insensitive to syntactic alternations and can also be applied to nominal predicates. Previous work has shown that semantic roles are useful for a wide range of natural language processing tasks, with recent applications including statistical machine translation (Aziz et al., 2011; Xiong et al., 2012), plagiarism detection (Osman et al., 2012; Paul and Jamal, 2015), and multi-document abstractive summarization (Khan et al., 2015). The task of semantic role labeling (SRL) was pioneered by Gildea and Jurafsky (2002). In System Analysis mate-tools *He had [troubleA0] raising [fundsA1]. mateplus *He had [troubleA0] raising [fundsA1]. TensorSRL *He had trouble raising [fundsA1]. easySRL *He had trouble raising [fundsA1]. This work [HeA0] had trouble raising [fundsA1]. Table 1: Outputs of SRL systems for the sentence He had trouble raising funds. Arguments of raise are shown with predicted roles as defined in PropBank (A0: getter of money; A1: money). Asterisks mark flawed analyses that miss the argument He. their work, features based on syntactic constituent trees were identified as most valuable for labeling predicate-argument relationships. Later work confirmed the importance of syntactic parse features (Pradhan et al., 2005; Punyakanok et al., 2008) and found that dependency parse trees provide a better form of representation to assign role labels to arguments (Johansson and Nugues, 2008). Most semantic role labeling approaches to date rely heavily on lexical and syntactic indicator features. Through the availability of large annotated resources, such as PropBank (Palmer et al., 2005), statistical models based on such features achieve high accuracy. However, results often fall short when the input to be labeled involves instances of linguistic phenomena that are relevant for the labeling decision but appear infrequently at training time. Examples include control and raising verbs, nested conjunctions or other recursive structures, as well as rare nominal predicates. The difficulty lies in that simple lexical and syntactic indicator features are not able to model interactions triggered by such phenomena. 
For instance, con1192 sider the sentence He had trouble raising funds and the analyses provided by four publicly available tools in Table 1 (mate-tools, Bj¨orkelund et al. (2010); mateplus, Roth and Woodsend (2014); TensorSRL, Lei et al. (2015); and easySRL, Lewis et al. (2015)). Despite all systems claiming stateof-the-art or competitive performance, none of them is able to correctly identify He as the agent argument of the predicate raise. Given the complex dependency path relation between the predicate and its argument, none of the systems actually identifies He as an argument at all. In this paper, we develop a new neural network model that can be applied to the task of semantic role labeling. The goal of this model is to better handle control predicates and other phenomena that can be observed from the dependency structure of a sentence. In particular, we aim to model the semantic relationships between a predicate and its arguments by analyzing the dependency path between the predicate word and each argument head word. We consider lexicalized paths, which we decompose into sequences of individual items, namely the words and dependency relations on a path. We then apply long-short term memory networks (Hochreiter and Schmidhuber, 1997) to find a recurrent composition function that can reconstruct an appropriate representation of the full path from its individual parts (Section 2). To ensure that representations are indicative of semantic relationships, we use semantic roles as target labels in a supervised setting (Section 3). By modeling dependency paths as sequences of words and dependencies, we implicitly address the data sparsity problem. This is the case because we use single words and individual dependency relations as the basic units of our model. In contrast, previous SRL work only considered full syntactic paths. Experiments on the CoNLL-2009 benchmark dataset show that our model is able to outperform the state-of-the-art in English (Section 4), and that it improves SRL performance in other languages, including Chinese, German and Spanish (Section 5). 2 Dependency Path Embeddings In the context of neural networks, the term embedding refers to the output of a function f within the network, which transforms an arbitrary input into a real-valued vector output. Word embeddings, for instance, are typically computed by forwarding a ROOT he A0 had trouble raising raise.01 funds A1 SBJ OBJ NMOD OBJ Figure 1: Dependency path (dotted) between the predicate raising and the argument he. one-hot word vector representation from the input layer of a neural network to its first hidden layer, usually by means of matrix multiplication and an optional non-linear function whose parameters are learned during neural network training. Here, we seek to compute real-valued vector representations for dependency paths between a pair of words ⟨wi,w j⟩. We define a dependency path to be the sequence of nodes (representing words) and edges (representing relations between words) to be traversed on a dependency parse tree to get from node wi to node w j. In the example in Figure 1, the dependency path from raising to he is raising NMOD −−−→trouble OBJ −−→had SBJ ←−he. Analogously to how word embeddings are computed, the simplest way to embed paths would be to represent each sequence as a one-hot vector. 
However, this is suboptimal for two reasons: Firstly, we expect only a subset of dependency paths to be attested frequently in our data and therefore many paths will be too sparse to learn reliable embeddings for them. Secondly, we hypothesize that dependency paths which share the same words, word categories or dependency relations should impact SRL decisions in similar ways. Thus, the words and relations on the path should drive representation learning, rather than the full path on its own. The following sections describe how we address representation learning by means of modeling dependency paths as sequences of items in a recurrent neural network. 2.1 Recurrent Neural Networks The recurrent model we use in this work is a variant of the long-short term memory (LSTM) network. It takes a sequence of items X = x1,...,xn as input, recurrently processes each item xt ∈X at a time, and finally returns one embedding state en for the complete input sequence. For each time 1193 he N had V trouble N raising V ... subj obj nmod x1 xn en next layer Figure 2: Example input and embedding computation for the path from raising to he, given the sentence he had trouble raising funds. LSTM time steps are displayed from right to left. step t, the LSTM model updates an internal memory state mt that depends on the current input as well as the previous memory state mt−1. In order to capture long-term dependencies, a so-called gating mechanism controls the extent to which each component of a memory cell state will be modified. In this work, we employ input gates i, output gates o and (optional) forget gates f. We formalize the state of the network at each time step t as follows: it = σ([Wmimt−1]+Wxixt +bi) (1) ft = σ([Wmfmt−1]+Wxfxt +bf) (2) mt = it ⊙(Wxmxt)+ft ⊙mt−1 +bm (3) ot = σ([Wmomt]+Wxoxt +bo) (4) et = ot ⊙σ(mt) (5) In each equation, W describes a matrix of weights to project information between two layers, b is a layer-specific vector of bias terms, and σ is the logistic function. Superscripts indicate the corresponding layers or gates. Some models described in Section 3 do not make use of forget gates or memory-to-gate connections. In case no forget gate is used, we set ft = 1. If no memoryto-gate connections are used, the terms in square brackets in (1), (2), and (4) are replaced by zeros. 2.2 Embedding Dependency Paths We define the embedding of a dependency path to be the final memory output state of a recurrent LSTM layer that takes a path as input, with each input step representing a binary indicator for a part-of-speech tag, a word form, or a dependency relation. In the context of semantic role labeling, we define each path as a sequence from a predicate to its potential argument.1 Specifically, we define the first item x1 to correspond to the part-of-speech tag of the predicate word wi, followed by its actual word form, and the relation to the next word wi+1. The embedding of a dependency path corresponds to the state en returned by the LSTM layer after the input of the last item, xn, which corresponds to the word form of the argument head word w j. An example is shown in Figure 2. The main idea of this model and representation is that word forms, word categories and dependency relations can all influence role labeling decisions. The word category and word form of the predicate first determine which roles are plausible and what kinds of path configurations are to be expected. The relations and words seen on the path can then manipulate these expectations. 
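To make the path representation concrete, the following is a minimal NumPy sketch that turns the Figure 2 path from raising to he into an item sequence and applies the gated update of equations (1)-(5). The one-hot inputs, the random weights, the variant switches, and the exact interleaving of POS tags, word forms and relations for the intermediate nodes are illustrative assumptions based on our reading of the figure, not a specification of the authors' system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Item sequence for the Figure 2 path (predicate "raising" to argument "he").
path_items = ["V", "raising", "NMOD", "N", "trouble", "OBJ",
              "V", "had", "SBJ", "N", "he"]
vocab = {item: idx for idx, item in enumerate(sorted(set(path_items)))}

def one_hot(item, size):
    v = np.zeros(size)
    v[vocab[item]] = 1.0
    return v

def path_embedding(items, p, d_e, use_forget_gate=True, mem_to_gates=True):
    # Gated update of equations (1)-(5); the bracketed memory-to-gate terms
    # and the forget gate can be switched off, with f_t = 1 when no forget
    # gate is used, mirroring the model variants described above.
    m = np.zeros(d_e)
    for item in items:
        x = one_hot(item, len(vocab))
        mg = m if mem_to_gates else np.zeros(d_e)
        i = sigmoid(p["Wmi"] @ mg + p["Wxi"] @ x + p["bi"])              # (1)
        f = (sigmoid(p["Wmf"] @ mg + p["Wxf"] @ x + p["bf"])
             if use_forget_gate else np.ones(d_e))                       # (2)
        m = i * (p["Wxm"] @ x) + f * m + p["bm"]                         # (3)
        mo = m if mem_to_gates else np.zeros(d_e)
        o = sigmoid(p["Wmo"] @ mo + p["Wxo"] @ x + p["bo"])              # (4)
        e = o * sigmoid(m)                                               # (5)
    return e                                                             # e_n

rng = np.random.default_rng(1)
d_e, d_x = 16, len(vocab)                    # small placeholder embedding size
params = {name: rng.normal(scale=0.1,
                           size=(d_e, d_x if name.startswith("Wx") else d_e))
          for name in ["Wmi", "Wxi", "Wmf", "Wxf", "Wxm", "Wmo", "Wxo"]}
params.update({b: np.zeros(d_e) for b in ["bi", "bf", "bm", "bo"]})
print(path_embedding(path_items, params, d_e).shape)   # (16,)
```

The final state e_n is the path embedding that is later combined with binary indicator features; only the forward computation is shown here.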
In Figure 2, for instance, the verb raising complements the phrase had trouble, which makes it likely that the subject he is also the logical subject of raising. By using word forms, categories and dependency relations as input items, we ensure that specific words (e.g., those which are part of complex predicates) as well as various relation types (e.g., subject and object) can appropriately influence the representation of a path. While learning corresponding interactions, the network is also able to determine which phrases and dependency relations might not influence a role assignment decision (e.g., coordinations). 2.3 Joint Embedding and Feature Learning Our SRL model consists of four components depicted in Figure 3: (1) an LSTM component takes lexicalized dependency paths as input, (2) an additional input layer takes binary features as input, (3) a hidden layer combines dependency path embeddings and binary features using rectified linear units, and (4) a softmax classification layer produces output based on the hidden layer state as input. We therefore learn path embeddings jointly with feature detectors based on traditional, binary indicator features. Given a dependency path X, with steps xk ∈ {x1,...,xn}, and a set of binary features B as input, we use the LSTM formalization from equations (1–5) to compute the embedding en at time 1We experimented with different sequential orders and found this to lead to the best validation set results. 1194 (1) x1 Input from path X . . . xn i1 m0 f1 LSTM cell m1 o1 e1 . . . en Vector of binary indicator features B (2) (3) h (4) s class label c Figure 3: Neural model for joint learning of path embeddings and higher-order features: The path sequence x1 ...xn is fed into a LSTM layer, a hidden layer h combines the final embedding en and binary input features B, and an output layer s assigns the highest probable class label c. step n and formalize the state of the hidden layer h and softmax output sc for each class category c as follows: h = max(0,WBhB+Wehen +bh) (6) sc = Wes c en +Whs c h+bs c Σi(Wes i en +Whs i h+bs i) (7) 3 System Architecture The overall architecture of our SRL system closely follows that of previous work (Toutanova et al., 2008; Bj¨orkelund et al., 2009) and is depicted in Figure 4. We use a pipeline that consists of the following steps: predicate identification and disambiguation, argument identification, argument classification, and re-ranking. The neural-network components introduced in Section 2 are used in the last three steps. The following sub-sections describe all components in more detail. 3.1 Predicate Identification and Disambiguation Given a syntactically analyzed sentence, the first two steps in an end-to-end SRL system are to identify and disambiguate the semantic predicates in the sentence. Here, we focus on verbal and nominal predicates but note that other syntactic categories have also been construed as predicates in the NLP literature (e.g., prepositions; Srikumar and Roth (2013)). For both identification and disambiguation steps, we apply the same logistic reHe had trouble raising funds. PREDICATE IDENTIFICATION PREDICATE DISAMBIGUATION raise.01 sense ARGUMENT IDENTIFICATION ARG? ARG? 2nd best arg 1st best arg ARGUMENT CLASSIFICATION A0 A1 A0 best label best label 2nd best RERANKER he funds funds funds raise.01 raise.01 raise.01 A0 A1 best overall scoring structure score score OUTPUT HeA0 had trouble raising fundsA1. Figure 4: Pipeline architecture of our SRL system. 
3 System Architecture

The overall architecture of our SRL system closely follows that of previous work (Toutanova et al., 2008; Björkelund et al., 2009) and is depicted in Figure 4. We use a pipeline that consists of the following steps: predicate identification and disambiguation, argument identification, argument classification, and reranking. The neural-network components introduced in Section 2 are used in the last three steps. The following sub-sections describe all components in more detail.

Figure 4: Pipeline architecture of our SRL system (example input: "He had trouble raising funds"; output: "He[A0] had trouble raising funds[A1]").

3.1 Predicate Identification and Disambiguation

Given a syntactically analyzed sentence, the first two steps in an end-to-end SRL system are to identify and disambiguate the semantic predicates in the sentence. Here, we focus on verbal and nominal predicates but note that other syntactic categories have also been construed as predicates in the NLP literature (e.g., prepositions; Srikumar and Roth (2013)). For both identification and disambiguation steps, we apply the same logistic regression classifiers used in the SRL components of mate-tools (Björkelund et al., 2010). The classifiers for both tasks make use of a range of lexico-syntactic indicator features, including the predicate word form, its predicted part-of-speech tag, and the dependency relations to all of its syntactic children.

3.2 Argument Identification and Classification

Given a sentence and a set of sense-disambiguated predicates in it, the next two steps of our SRL system are to identify all arguments of each predicate and to assign suitable role labels to them. For both steps, we train several LSTM-based neural network models as described in Section 2. In particular, we train separate networks for nominal and verbal predicates and for identification and classification. Following the findings of earlier work (Xue and Palmer, 2004), we assume that different feature sets are relevant for the respective tasks and hence different embedding representations should be learned. As binary input features, we use the following sets from the SRL literature (Björkelund et al., 2010).

Lexico-syntactic features Word form and word category of the predicate and candidate argument; dependency relations from predicate and argument to their respective syntactic heads; full dependency path sequence from predicate to argument.

Local context features Word forms and word categories of the candidate argument's and predicate's syntactic siblings and children words.

Other features Relative position of the candidate argument with respect to the predicate (left, self, right); sequence of part-of-speech tags of all words between the predicate and the argument.

Table 2: Hyperparameters selected for best models and training procedures
Argument labeling step   forget gate   memory→gates   |e|   |h|   alpha    dropout rate
Identification (verb)    −             +              25    90    0.0006   0.42
Identification (noun)    −             +              16    125   0.0009   0.25
Classification (verb)    +             −              5     300   0.0155   0.50
Classification (noun)    −             −              88    500   0.0055   0.46

3.3 Reranker

As all argument identification (and classification) decisions are independent of one another, we apply a global reranker as the last step of our pipeline. Given a predicate p, the reranker takes as input the n best sets of identified arguments as well as their n best label assignments and predicts the best overall argument structure. We implement the reranker as a logistic regression classifier, with hidden and embedding layer states of identified arguments as features, offset by the argument label, and a binary label as output (1: best predicted structure, 0: any other structure). At test time, we select the structure with the highest overall score, which we compute as the geometric mean of the global regression score and all argument-specific scores.

4 Experiments

In this section, we demonstrate the usefulness of dependency path embeddings for semantic role labeling. Our hypotheses are that (1) modeling dependency paths as sequences will lead to better representations for the SRL task, thus increasing labeling precision overall, and that (2) embeddings will address the problem of data sparsity, leading to higher recall. To test both hypotheses, we experiment on the in-domain and out-of-domain test sets provided in the CoNLL-2009 shared task (Hajič et al., 2009) and compare results of our system, henceforth PathLSTM, with systems that do not involve path embeddings. We compute precision, recall and F1-score using the official CoNLL-2009 scorer.2 The code is available at https://github.com/microth/PathLSTM.
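The final selection step of the reranker (Section 3.3) can be sketched as follows. The candidate structures, their probabilities, and the helper names are made up for illustration; only the geometric-mean combination of the global and argument-specific scores follows the description above.

```python
# Minimal sketch of reranker-based structure selection: each candidate argument
# structure is scored by the geometric mean of the global reranker probability
# and the probabilities of its individual argument decisions.
from math import prod

def structure_score(global_prob, argument_probs):
    """Geometric mean of the global score and all argument-specific scores."""
    scores = [global_prob] + list(argument_probs)
    return prod(scores) ** (1.0 / len(scores))

# n-best candidate structures for one predicate (hypothetical probabilities).
candidates = [
    {"args": {"he": "A0", "funds": "A1"}, "global": 0.81, "locals": [0.92, 0.88]},
    {"args": {"funds": "A1"},             "global": 0.74, "locals": [0.88]},
    {"args": {"he": "A1", "funds": "A1"}, "global": 0.40, "locals": [0.31, 0.88]},
]

best = max(candidates, key=lambda c: structure_score(c["global"], c["locals"]))
print(best["args"])   # -> {'he': 'A0', 'funds': 'A1'}
```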
Model selection We train argument identification and classification models using the XLBP toolkit for neural networks (Monner and Reggia, 2012). The hyperparameters for each step were selected based on the CoNLL 2009 development set. For direct comparison with previous work, we use the same preprocessing models and predicate-specific SRL components as provided with mate-tools (Bohnet, 2010; Bj¨orkelund et al., 2010). The types and ranges of hyperparameters considered are as follows: learning rate α ∈ [0.00006,0.3], dropout rate d ∈[0.0,0.5], and hidden layer sizes |e| ∈[0,100], |h| ∈[0,500]. In addition, we experimented with different gating mechanisms (with/without forget gate) and memory access settings (with/without connections between all gates and the memory layer, cf. Section 2). The best parameters were chosen using the Spearmint hyperparameter optimization toolkit (Snoek et al., 2012), applied for approx. 200 iterations, and are summarized in Table 2. Results The results of our in- and out-of-domain experiments are summarized in Tables 3 and 5, respectively. We present results for different system configurations: ‘local’ systems make classification decisions independently, whereas ‘global’ systems include a reranker or other global inference mechanisms; ‘single’ refers to one model and ‘ensemble’ refers to combinations of multiple models. In the in-domain setting, our PathLSTM model achieves 87.7% (single) and 87.9% (ensemble) F1score, outperforming previously published best re2Some recently proposed SRL models are only evaluated on the CoNLL 2005 and 2012 data sets, which lack nominal predicates or dependency annotations. We do not list any results from those models here. 1196 System (local, single) P R F1 Bj¨orkelund et al. (2010) 87.1 84.5 85.8 Lei et al. (2015) − − 86.6 FitzGerald et al. (2015) − − 86.7 PathLSTM w/o reranker 88.1 85.3 86.7 System (global, single) P R F1 Bj¨orkelund et al. (2010) 88.6 85.2 86.9 Roth and Woodsend (2014)3 − − 86.3 FitzGerald et al. (2015) − − 87.3 PathLSTM 90.0 85.5 87.7 System (global, ensemble) P R F1 FitzGerald et al. 10 models − − 87.7 PathLSTM 3 models 90.3 85.7 87.9 Table 3: Results on the CoNLL-2009 in-domain test set. All numbers are in percent. PathLSTM P (%) R (%) F1 (%) w/o path embeddings 65.7 87.3 75.0 w/o binary features 73.2 33.3 45.8 Table 4: Ablation tests in the in-domain setting. sults by 0.4 and 0.2 percentage points, respectively. At a F1-score of 86.7%, our local model (using no reranker) reaches the same performance as state-of-the-art local models. Note that differences in results between systems might originate from the application of different preprocessing techniques as each system comes with its own syntactic components. For direct comparison, we evaluate against mate-tools, which use the same preprocessing techniques as PathLSTM. In comparison, we see improvements of +0.8–1.0 percentage points absolute in F1-score. In the out-of-domain setting, our system achieves new state-of-the-art results of 76.1% (single) and 76.5% (ensemble) F1-score, outperforming the previous best system by Roth and Woodsend (2014) by 0.2 and 0.6 absolute points, respectively. In comparison to mate-tools, we observe absolute improvements in F1-score of +0.4–0.8%. Discussion To determine the sources of individual improvements, we test PathLSTM models without specific feature types and directly compare PathLSTM and mate-tools, both of which use 3Results are taken from Lei et al. (2015). System (local, single) P R F1 Bj¨orkelund et al. 
(2010) 75.7 72.2 73.9 Lei et al. (2015) − − 75.6 FitzGerald et al. (2015) − − 75.2 PathLSTM w/o reranker 76.9 73.8 75.3 System (global, single) P R F1 Bj¨orkelund et al. (2010) 77.9 73.6 75.7 Roth and Woodsend (2014)3 − − 75.9 FitzGerald et al. (2015) − − 75.2 PathLSTM 78.6 73.8 76.1 System (global, ensemble) P R F1 FitzGerald et al. 10 models − − 75.5 PathLSTM 3 models 79.7 73.6 76.5 Table 5: Results on the CoNLL-2009 out-ofdomain test set. All numbers are in percent. the same preprocessing methods. Table 4 presents in-domain test results for our system when specific feature types are omitted. The overall low results indicate that a combination of dependency path embeddings and binary features is required to identify and label arguments with high precision. Figure 5 shows the effect of dependency path embeddings at mitigating sparsity: if the path between a predicate and its argument has not been observed at training time or only infrequently, conventional methods will often fail to assign a role. This is represented by the recall curve of mate-tools, which converges to zero for arguments with unseen paths. The higher recall curve for PathLSTM demonstrates that path embeddings can alleviate this problem to some extent. For unseen paths, we observe that PathLSTM improves over mate-tools by an order of magnitude, from 0.9% to 9.6%. The highest absolute gain, from 12.8% to 24.2% recall, can be observed for dependency paths that occurred between 1 and 10 times during training. Figure 7 plots role labeling performance for sentences with varying number of words. There are two categories of sentences in which the improvements of PathLSTM are most noticeable: Firstly, it better handles short sentences that contain expletives and/or nominal predicates (+0.8% absolute in F1-score). This is probably due to the fact that our learned dependency path representations are lexicalized, making it possible to model 1197 (Nested) subject control Coordinations involving A1 Relative clauses Coordinations involving A0 Complements of nominal predicates treasuryA0 ’s threat to trash the firmA1 , which was involved KeatingA0 has conceded attempted to buy tradingA1 was stopped and did not resume ◦A0 • A1 Figure 6: Dots correspond to the path representation of a predicate-argument instance in 2D space. White/black color indicates A0/A1 gold argument labels. Dotted ellipses denote instances exhibiting related syntactic phenomena (see rectangles for a description and dotted rectangles for linguistic examples). Example phrases show actual output produced by PathLSTM (underlined). 0 1–10 ≤102 ≤103 ≤104 >104 100 1 10 Number of training instances with same path Recall (%) PathLSTM mate-tools 0.9 9.6 24.2 12.8 36.4 30.1 49.6 76.4 88.3 Figure 5: Results on in-domain test instances, grouped by the number of training instances that have an identical (unlexicalized) dependency path. argument structures of different nominals and distinguishing between expletive occurrences of ‘it’ and other subjects. Secondly, it improves performance on longer sentences (up to +1.0% absolute in F1-score). This is mainly due to the handling of dependency paths that involve complex structures, such as coordinations, control verbs and nominal predicates. We collect instances of different syntactic phenomena from the development set and plot the learned dependency path representations in the embedding space (see Figure 6). We obtain a projection onto two dimensions using t-SNE (Van der Maaten and Hinton, 2008). 
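A projection of this kind can be sketched with off-the-shelf tools, for example as follows; scikit-learn's t-SNE implementation is an assumed stand-in for the original procedure behind Figure 6, and the embedding matrix and gold labels below are random placeholders rather than actual PathLSTM states.

```python
# Minimal sketch of projecting dependency path representations to two
# dimensions with t-SNE, grouped by gold PropBank label for inspection.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
path_embeddings = rng.normal(size=(200, 88))       # placeholder e_n vectors
gold_labels = rng.choice(["A0", "A1"], size=200)   # placeholder gold roles

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(path_embeddings)
for label in ("A0", "A1"):
    xs, ys = points[gold_labels == label].T
    print(label, "centroid:", xs.mean().round(2), ys.mean().round(2))
```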
Interestingly, we can 1–10 11–15 16–20 21–25 26–30 31– 85 86 87 88 89 90 Number of words F1-score (%) PathLSTM mate-tools 89.3 90.1 (+0.8) 86.1 87.1 (+1.0) 87.5 87.9 (+0.4) Figure 7: Results by sentence length. Improvements over mate-tools shown in parentheses. see that different syntactic configurations are clustered together in different parts of the space and that most instances of the PropBank roles A0 and A1 are separated. Example phrases in the figure highlight predicate-argument pairs that are correctly labeled by PathLSTM but not by mate-tools. Path embeddings are essential for handling these cases as indicator features do not generalize well enough. Finally, Table 6 shows results for nominal and verbal predicates as well as for different (gold) role labels. In comparison to mate-tools, we can see that PathLSTM improves precision for all argument types of nominal predicates. For verbal predicates, improvements can be observed in 1198 Predicate POS Improvement & Role Label PathLSTM over mate-tools P (%) R (%) P (%) R (%) verb / A0 90.8 89.2 −0.4 +1.8 verb / A1 91.0 91.9 +0.0 +1.1 verb / A2 84.3 76.9 +1.5 +0.0 verb / AM 82.2 72.4 +2.9 −2.0 noun / A0 86.9 78.2 +0.8 +3.3 noun / A1 87.5 84.4 +2.6 +2.2 noun / A2 82.4 76.8 +1.0 +2.1 noun / AM 79.5 69.2 +0.9 −2.8 Table 6: Results by word category and role label. terms of recall of proto-agent (A0) and protopatient (A1) roles, with slight gains in precision for the A2 role. Overall, PathLSTM does slightly worse with respect to modifier roles, which it labels with higher precision but at the cost of recall. 5 Path Embeddings in other Languages In this section, we report results from additional experiments on Chinese, German and Spanish data. The underlying question is to which extent the improvements of our SRL system for English also generalize to other languages. To answer this question, we train and test separate SRL models for each language, using the system architecture and hyperparameters discussed in Sections 3 and 4, respectively. We train our models on data from the CoNLL-2009 shared task, relying on the same features as one of the participating systems (Bj¨orkelund et al., 2009), and evaluate with the official scorer. For direct comparison, we rely on the (automatic) syntactic preprocessing information provided with the CoNLL test data and compare our results with the best two systems for each language that make use of the same preprocessing information. The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Bj¨orkelund et al. (2009) in all cases. For German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively. 6 Related Work Neural Networks for SRL Collobert et al. (2011) pioneered neural networks for the task of Chinese P R F1 PathLSTM 83.2 75.9 79.4 Bj¨orkelund et al. (2009) 82.4 75.1 78.6 Zhao et al. (2009) 80.4 75.2 77.7 German P R F1 PathLSTM 81.8 78.5 80.1 Bj¨orkelund et al. (2009) 81.2 78.3 79.7 Che et al. (2009) 82.1 75.4 78.6 Spanish P R F1 Zhao et al. (2009) 83.1 78.0 80.5 PathLSTM 83.2 77.4 80.2 Bj¨orkelund et al. (2009) 78.9 74.3 76.5 Table 7: Results (in percentage) on the CoNLL2009 test sets for Chinese, German and Spanish. semantic role labeling. They developed a feedforward network that uses a convolution function over windows of words to assign SRL labels. Apart from constituency boundaries, their system does not make use of any syntactic information. 
Foland and Martin (2015) extended their model and showcased significant improvements when including binary indicator features for dependency paths. Similar features were used by FitzGerald et al. (2015), who include role labeling predictions by neural networks as factors in a global model. These approaches all make use of binary features derived from syntactic parses either to indicate constituency boundaries or to represent full dependency paths. An extreme alternative has been recently proposed in Zhou and Xu (2015), who model SRL decisions with a multi-layered LSTM network that takes word sequences as input but no syntactic parse information at all. Our approach falls in between the two extremes: we rely on syntactic parse information but rather than solely making using of sparse binary features, we explicitly model dependency paths in a neural network architecture. Other SRL approaches Within the SRL literature, recent alternatives to neural network architectures include sigmoid belief networks (Henderson et al., 2013) as well as low-rank tensor models (Lei et al., 2015). Whereas Lei et al. only make use of dependency paths as binary indicator features, Henderson et al. propose a joint model for syntactic and semantic parsing that learns and ap1199 plies incremental dependency path representations to perform SRL decisions. The latter form of representation is closest to ours, however, we do not build syntactic parses incrementally. Instead, we take syntactically preprocessed text as input and focus on the SRL task only. Apart from more powerful models, most recent progress in SRL can be attributed to novel features. For instance, Deschacht and Moens (2009) and Huang and Yates (2010) use latent variables, learned with a hidden markov model, as features for representing words and word sequences. Zapirain et al. (2013) propose different selection preference models in order to deal with the sparseness of lexical features. Roth and Woodsend (2014) address the same problem with word embeddings and compositions thereof. Roth and Lapata (2015) recently introduced features that model the influence of discourse on role labeling decisions. Rather than coming up with completely new features, in this work we proposed to revisit some well-known features and represent them in a novel way that generalizes better. Our proposed model is inspired both by the necessity to overcome the problems of sparse lexico-syntactic features and by the recent success of SRL models based on neural networks. Dependency-based embeddings The idea of embedding dependency structures has previously been applied to tasks such as relation classification and sentiment analysis. Xu et al. (2015) and Liu et al. (2015) use neural networks to embed dependency paths between entity pairs. To identify the relation that holds between two entities, their approaches make use of pooling layers that detect parts of a path that indicate a specific relation. In contrast, our work aims at modeling an individual path as a complete sequence, in which every item is of relevance. Tai et al. (2015) and Ma et al. (2015) learn embeddings of dependency structures representing full sentences, in a sentiment classification task. In our model, embeddings are learned jointly with other features, and as a result problems that may result from erroneous parse trees are mitigated. 7 Conclusions We introduced a neural network architecture for semantic role labeling that jointly learns embeddings for dependency paths and feature combinations. 
Our experimental results indicate that our model substantially increases classification performance, leading to new state-of-the-art results. In a qualitive analysis, we found that our model is able to cover instances of various linguistic phenomena that are missed by other methods. Beyond SRL, we expect dependency path embeddings to be useful in related tasks and downstream applications. For instance, our representations may be of direct benefit for semantic and discourse parsing tasks. The jointly learned feature space also makes our model a good starting point for cross-lingual transfer methods that rely on feature representation projection to induce new models (Kozhevnikov and Titov, 2014). Acknowledgements We thank the three anonymous ACL referees whose feedback helped to substantially improve the present paper. The support of the Deutsche Forschungsgemeinschaft (Research Fellowship RO 4848/1-1; Roth) and the European Research Council (award number 681760; Lapata) is gratefully acknowledged. References Wilker Aziz, Miguel Rios, and Lucia Specia. 2011. Shallow semantic trees for smt. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 316–322, Edinburgh, Scotland. Anders Bj¨orkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 43–48, Boulder, Colorado. Anders Bj¨orkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntactic and semantic dependency parser. In Coling 2010: Demonstration Volume, pages 33–36, Beijing, China. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89–97, Beijing, China. Wanxiang Che, Zhenghua Li, Yongqiang Li, Yuhang Guo, Bing Qin, and Ting Liu. 2009. Multilingual dependency-based syntactic and semantic parsing. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 49–54, Boulder, Colorado. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. 1200 Koen Deschacht and Marie-Francine Moens. 2009. Semi-supervised semantic role labeling using the Latent Words Language Model. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 21–29, Singapore. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 960–970, Lisbon, Portugal. William Foland and James Martin. 2015. Dependency-based semantic role labeling using convolutional neural networks. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 279–288, Denver, Colorado. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–18, Boulder, Colorado. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics, 39(4):949–998. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Fei Huang and Alexander Yates. 2010. Open-domain semantic role labeling by modeling word spans. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 968– 978, Uppsala, Sweden. Richard Johansson and Pierre Nugues. 2008. The effect of syntactic representation on semantic role labeling. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 393–400, Manchester, United Kingdom. Atif Khan, Naomie Salim, and Yogan Jaya Kumar. 2015. A framework for multi-document abstractive summarization based on semantic role labelling. Applied Soft Computing, 30:737–747. Mikhail Kozhevnikov and Ivan Titov. 2014. Crosslingual model transfer using feature representation projection. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 579–585, Baltimore, Maryland. Tao Lei, Yuan Zhang, Llu´ıs M`arquez, Alessandro Moschitti, and Regina Barzilay. 2015. High-order lowrank tensors for semantic role labeling. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1150–1160, Denver, Colorado. Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG parsing and semantic role labelling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1444–1454, Lisbon, Portugal. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 285–290, Beijing, China. Mingbo Ma, Liang Huang, Bowen Zhou, and Bing Xiang. 2015. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 174–179, Beijing, China. Derek Monner and James A Reggia. 2012. A generalized LSTM-like training algorithm for second-order recurrent neural networks. Neural Networks, 25:70– 83. Ahmed Hamza Osman, Naomie Salim, Mohammed Salem Binwahlan, Rihab Alteeb, and Albaraa Abuobieda. 2012. An improved plagiarism detection scheme based on semantic role labeling. Applied Soft Computing, 12(5):1493–1502. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Merin Paul and Sangeetha Jamal. 2015. An improved SRL based plagiarism detection technique using sentence ranking. Procedia Computer Science, 46:223–230. Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 217–220, Ann Arbor, Michigan. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. 
The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. Michael Roth and Mirella Lapata. 2015. Contextaware frame-semantic role labeling. Transactions of the Association for Computational Linguistics, 3:449–460. 1201 Michael Roth and Kristian Woodsend. 2014. Composition of word representations improves semantic role labelling. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 407–413, Doha, Qatar. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. 2012. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, Lake Tahoe, Nevada. Vivek Srikumar and Dan Roth. 2013. Modeling semantic relations expressed by prepositions. Transactions of the Association for Computational Linguistics, 1:231–242. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1556–1566, Beijing, China. Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2):161–191. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605. Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for smt. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 902–911, Jeju Island, Korea. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1785–1794, Lisbon, Portugal. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 88–94, Barcelona, Spain. Be˜nat Zapirain, Eneko Agirre, Llu´ıs M`arquez, and Mihai Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics, 39(3):631–663. Hai Zhao, Wenliang Chen, Jun’ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Multilingual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 61–66, Boulder, Colorado. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1127–1137, Beijing, China. 1202
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1203–1212, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Prediction of Prospective User Engagement with Intelligent Assistants Shumpei Sano, Nobuhiro Kaji, and Manabu Sassano Yahoo Japan Corporation 9-7-1 Akasaka, Minato-ku, Tokyo 107-6211, Japan {shsano, nkaji, msassano}@yahoo-corp.jp Abstract Intelligent assistants on mobile devices, such as Siri, have recently gained considerable attention as novel applications of dialogue technologies. A tremendous amount of real users of intelligent assistants provide us with an opportunity to explore a novel task of predicting whether users will continually use their intelligent assistants in the future. We developed prediction models of prospective user engagement by using large-scale user logs obtained from a commercial intelligent assistant. Experiments demonstrated that our models can predict prospective user engagement reasonably well, and outperforms a strong baseline that makes prediction based past utterance frequency. 1 Introduction Intelligent assistants on mobile devices, such as Siri,1 have recently gained considerable attention as novel applications of dialogue technologies (Jiang et al., 2015). They receive instructions from users via voice control to execute a wide range of tasks (e.g., searching the Web, setting alarms, making phone calls, and so on). Some are able to even chat or play games with users (Kobayashi et al., 2015). Intelligent assistants possess a unique characteristic as an object of dialogue study. Popular intelligent assistants have thousands or even millions of real users, thanks to the prevalence of mobile devices. Some of those users continually use intelligent assistants for a long period of time, while others stop using them after a few trials. Such user behaviors are rarely observed in conventional experimental environments, where dialogue systems 1http://www.apple.com/ios/siri have only a small number of experimental participants who almost always continue to use the systems for the whole duration of the experiment. This paper explores a novel task of predicting whether a user will continue to use intelligent assistants in the future (This task is referred to as prospective user engagement prediction and its definition is given in Section 3). We attempt to develop such a prediction model, which would contribute to enhancing intelligent assistants in many ways. For example, if users who are likely to stop using systems can be identified, intelligent assistants can take actions to gain or maintain their interest (e.g., by sending push notifications). This task is related to, but is significantly different from, user engagement detection, which has been extensively explored in prior dialogue studies (Wang and Hirschberg, 2011; Forbes-Riley et al., 2012; Forbes-Riley and Litman, 2013; Oertel and Salvi, 2013). The prior studies attempt to predict how strongly users are currently engaged in dialogues with systems. On the other hand, the goal of this study is to predict how strongly users will be engaged with intelligent assistants in the future. The largest difference lies in whether the prediction target is user engagement at present or in the future. Also, our definition of engagement is slightly different from the prior ones. In this study, engagement is considered as a sentiment as to whether users like intelligent assistants and feel like they want to use them continually. 
To develop and evaluate models of prospective user engagement prediction, we exploit large-scale user logs obtained from a commercial intelligent assistant. Since monitoring users’ long-term behaviors is considered crucial for precise prediction of their prospective engagement, we tailor various features by extracting usage patterns from a long history of user dialogues. The resulting features are contrastive to those previously used for user 1203 engagement detection, in which features are basically extracted from a single user utterance. Experimental results demonstrated that our models are able to predict prospective user engagement reasonably well and are overwhelmingly better than a strong baseline that makes predictions based on past utterance frequency. We also discuss the trade-off between prediction accuracy and instancy. Specifically, we investigate how the prediction performance improves as we wait for more user dialogues to be collected. 2 Yahoo! Voice Assist This section summarizes Yahoo! Voice Assist2 (hereafter Voice Assist), a commercial intelligent assistant that is investigated in this study. Although our investigation exclusively focused on this system, we will discuss how our findings can be generalized to other intelligent assistants in Section 5.5 Table 1 illustrates example dialogues of Voice Assist users.3 As illustrated, Voice Assist offers a variety of functions to mobile users. They are largely categorized into two types: device operation Voice Assist allows users to operate mobile devices through dialogue. This includes setting alarms, making phone calls, searching the Web, launching an app, and so on (e.g., V1, V3, V4, and V5). chat Voice Assist can give greetings to, have conversations with, and play games with users (e.g., V2 and V6). In contrast to device operations for accomplishing certain tasks, these functions are offered for fun or for facilitating smooth communication. Voice Assist currently supports 66 functions (including setting alarms, the word chain game, etc.) and they can further be classified into fine-grained types, although a detailed description of them is beyond the scope of this paper. Voice Assist users can register personal profile such as their home address and birthday, with which the system makes personalized responses. For example, the home address are used to estimate users’ location when launching weather fore2http://v-assist.yahoo.co.jp (in Japanese) 3Because Voice Assist supports only Japanese, all utterances are made in Japanese. In this paper, we present English translations rather than the original Japanese to facilitate nonJapanese readers’ understanding. U1 Wake me up at 8 o’clock tomorrow. V1 OK. Set the alarm to 8 am. (show the timer configuration) U2 Good morning. V2 Good morning. Thank you for talking to me. U3 Check today’s weather. V3 Tokyo’s weather will be fine today. (launch weather forecast app) U4 From Tokyo to Kyoto. V4 A rapid express is available at 9:30. (launch transit app to show timetable) U5 What time will it arrive? V5 It will arrive at Kyoto at 11:50. (show the timetable again) U6 Let’s play the word chain game. V6 OK. Apple... Table 1: Example dialogues of Voice Assist users. U and V indicate the user and Voice Assist, respectively. The notes in parentheses represent actions that Voice Assist takes after the responses. cast apps (i.e., response V3), while knowing birthdays allows Voice Assist to send greeting messages to users on their birthdays. 
3 Prospective User Engagement Prediction This section specifies the task of prospective user engagement prediction. We first explain the user log data used in our experiments. We then describe two kinds of task settings. 3.1 User log data We conducted an empirical study in which we examined Voice Assist user logs. We randomly sampled 348,295 users who used the system at least once from March 2015 to June 2015 (16 weeks) and extracted all their dialogue histories during that period. The log data included 7,472,915 utterances in total. Table 2 illustrates examples of user logs. We used the following seven attributes: user ID, nickname, birthday, time stamp, user utterance, system response, and response type. Because it is not mandatory to register the personal profiles (including nicknames, birthdays, etc.), they are sometimes missing, as indicated by N/A in the table. The response type represents the 66 functions supported by Voice Assist. The time stamps were used to segment utterances into sessions, as rep1204 ID Nickname Birthday Time Stamp Use Utterance System Response Type A John 2000-1-1 2015-3-1 23:50 Wake me up at 8 am tomorrow. OK. Set the alarm to 8 am. ALARM 2015-3-2 08:10 Good morning. Good morning. CHAT 2015-3-2 08:13 Check today’s weather. Tokyo’s weather will be fine today. WEATHER B N/A 2002-1-1 2015-3-1 08:00 From Tokyo to Kyoto. A rapid express is available at 9:30. TRANSIT 2015-3-1 08:01 What time will it arrive? It will arrive at Kyoto at 11:50. TRANSIT 2015-3-5 19:10 Let’s play the word chain game. OK. Apple... WORD CHAIN Table 2: User log examples. The dashed line represents the session boundary. resented by dashed lines in the table. We follow (Jiang et al., 2015) to define sessions as utterance sequences in which the interval of two adjacent utterances does not exceed 30 minutes. 3.2 Task definition We propose two types of prospective user engagement prediction tasks. In both tasks, we collect user dialogues from the first eight weeks of the user logs (referred to as observation period. We will discuss on length of observation period in Section 5.4), and then use those past dialogues to predict whether users are engaged with the intelligent assistant in the last eight weeks of the log data (referred to as prediction period).4 We specifically explored two prediction tasks as follows. Dropout prediction The first task is to predict whether a given user will not at all use the system in the prediction period. This task is referred to as dropout prediction and is formulated as a binary classification problem. The model of dropout prediction would allow intelligent assistants to take proactive actions against users who are likely to stop using the system. There are 71,330 dropout users, who does not at all use the system in the prediction period, among 275,630 in our data set. Engagement level prediction The second task aims at predicting how frequently the system will be used in the prediction period by a given user. Because there are outliers, or heavy users, who use the system extremely frequently (one user used the system as many as 1,099 times in the eight weeks), we do not attempt to directly predict the number of utterances or sessions. Instead, we define engagement levels as detailed below, and aim at predicting those values. The engagement levels are defined as follows. 
First, users are sorted in the ascending order of 4We removed users from the log data if the number of sessions was only once in the observation period, because such data lack a sufficient amount of dialogue histories for making a reliable prediction. Level # of sessions # of users 1 0 71,330 2 1–3 66,626 3 4–13 69,551 4 14– 68,123 Table 3: User distribution over the four engagement levels. The second column represents intervals of the number of sessions corresponding to the four levels. the number of sessions they made in the prediction period. We then split users into four equally-sized groups. The engagement levels of users in the four groups are defined as 1, 2, 3, and 4, respectively (Table 3). Note that a larger value of the engagement level means that the users are more engaged with the intelligent assistants. This task is referred to as engagement level prediction and is formulated as a regression problem. The engagement level prediction has different applications from the dropout prediction. For example, it would allow us to detect in advance that a user’s engagement level will change from four to three in the near future. It is beyond the scope of dropout prediction task to foresee such a change. 4 Features The dropout prediction is performed using linear support vector machine (SVM) (Fan et al., 2008), while the engagement level prediction is performed using support vector regression (SVR) (Smola and Sch¨olkopf, 2004) on the same feature set. Here, we divide the features into four categories by their function: utterance frequency features, response frequency features, time interval features, and user profile features. Table 4 lists these features. 4.1 Utterance frequency features Here, we describe the features related to utterance frequency. These features attempt to capture our 1205 #Features Name Definition 1 Utterance The number of utterances 7 UtterancewWeeks The number of utterances in recent w weeks 1 LongUtterance The number of lengthy utterances 1 UrgedUtterance The number of utterances made in response to push notifications 1 Restatement The number of restatement utterances 100 UtteranceTopici The number of utterances including words in the i-th cluster 1 Session The number of sessions 7 SessionwWeeks The number of sessions in recent w weeks 7 SessionByDay The number of sessions during each day of the week 66 Response(t) The number of responses with response type t 66 FirstResponse(t) Response(t) computed by using only the first responses in sessions 1 LongResponse The number of lengthy responses 1 ErrorMessage The number of error messages 1 MaxInterval Max days between adjacent utterances 1 MinInterval Min days between adjacent utterances 1 AvgInterval Average days between adjacent utterances 1 InactivePeriod Days from the last utterance date 66 InactivePeriod(t) InactivePeriod computed for each type of the last response 1 Nickname Whether or not a user has provided nickname information 1 Birthday Whether or not a user has provided birthday information 6 Age User’s age category Table 4: List of features. The utterance frequency features, response frequency features, and time interval features are all scaled. intuition that users who frequently use intelligent assistants are likely to be engaged with them. Utterance The number of utterances in the observation period. For scaling purposes, the value of this feature is set to log10(x+1), where x is the number of utterances. The same scaling is performed on all features but user profile features. 
UtterancewWeeks The number of utterances in the last w (1 ≤w < 8) weeks of the observation period. LongUtterance The number of lengthy utterances (more than 20 characters long). Jiang et al. (2015) pointed out that long utterances are prone to cause ASR errors. Since ASR errors are a factor that decreases user engagement, users who are prone to make long utterances are likely to be disengaged. UrgedUtterance The number of utterances made in response to push notifications sent from the system. We expect that engaged users tend to react to push notifications. Restatement The number of restatements made by users. Jiang et al. (2015) found that users tend to repeat previous utterances in case of ASR errors. An utterance is regarded as a restatement of the previous one if their normalized edit distance (Li and Liu, 2007) is below 0.5. UtteranceTopici The number of utterances including a keyword belonging to i-th word cluster. To induce the word clusters, 100-dimensional word embeddings are first learned from the log data using WORD2VEC (Mikolov et al., 2013)5, and then K-means clustering (K=100) is performed (MacQueen, 1967). All options of WORD2VEC are set to the default values. These features aim at capturing topics on utterances or speech acts. Table 5 illustrates example words in the clusters. For example, utterances including words in the cluster ID 36 and 63 are considered to be greeting acts and sports-related conversations, respectively. 5https://code.google.com/archive/p/ word2vec 1206 Cluster ID Example words 14 (Weather) pollen, typhoon, temperature 23 (Curse) die, stupid, shit, shurrup, dorf 36 (Greeting) thanks, good morning, hello 48 (Sentiment) funny, cute, good, awesome 63 (Sports) World cup, Nishikori, Yankees Table 5: Example words in the clusters. Cluster names (presented in parentheses) are manually provided by the authors to help readers understand the word clusters. Session The number of sessions in the observation period. SessionwWeeks The number of sessions in the last w (1 ≤w < 8) weeks of the observation period. SessionByDay The number of sessions in each day of week. There are seven different features of this type. 4.2 Response frequency features Here, we describe the features of the response frequency. Response(t) The number of system responses with response type t. FirstResponse(t) Response(t) features that are computed by using only the first responses in sessions. Our hypothesis is that first responses in sessions crucially affect user engagement. LongResponse The number of lengthy responses (more than 50 characters long). Because longer responses require a longer reading time, they are prone to irritate users and consequently decrease user engagement. ErrorMessage The number of error messages. Voice Assist returns error messages (Sorry, I don’t know.) when it fails to find appropriate responses to the user’s utterances. We consider that these error messages decrease user engagement. 4.3 Time interval features Here, we describe the features related to the session interval times. MaxInterval The maximum interval (in days) between adjacent sessions in the observation period. MinInterval The minimum interval (in days) between adjacent sessions in the observation period. AvgInterval The average interval (in days) between adjacent sessions in the observation period. InactivePeriod The time span (in days) from the last utterance to the end of the observation period. InactivePeriod(t) InactivePeriod computed separately for each type t of the last response. 
4.4 User profile features Here, we describe the features of the user’s profile information. Since it is not mandotory for users to register their profiles, we expect that those who have provided profile information are likely to be engaged with the system. Nickname A binary feature representing whether or not the user has provided their nickname. Birthday A binary feature representing whether or not the user has provided their birthday. Age Six binary features representing the user’s age. They respectively indicate whether the user is less than twenty years, in their 20’s, 30’s, 40’s, or 50’s, or is more than 60 years old. Note that these features are available only if the user has provided their birthday. 5 Experiments In this section, we describe our experimental results and discuss them. 5.1 Experimental settings We randomly divided the log data into training, development, and test sets with the ratio of 8:1:1. Note that we confirmed that the users in different data sets do not overlap with each other. We trained the model with the training set and optimized hyperparameters with the development set. The test set was used for a final blind test to evaluate the learnt model. We used the LIBLINEAR tool (Fan et al., 2008) to train the SVM for the dropout prediction and 1207 Accuracy F–measure Baseline 56.8 0.482 Proposed 77.6 0.623 Utterance frequency 70.2 0.578 Response frequency 54.8 0.489 Time interval 74.6 0.617 User profile 39.9 0.406 Table 6: Classification accuracies and F–measures in the dropout prediction task. Precision Recall Baseline 0.350 0.774 Proposed 0.553 0.714 Utterance frequency 0.458 0.785 Response frequency 0.346 0.831 Time interval 0.507 0.789 User profile 0.273 0.793 Table 7: Precisions and Recalls in the dropout prediction task. the SVR for the engagement level prediction task. We optimized the C parameter on the development set. In the dropout prediction task, we used the -w option to weigh the C parameter of each class with the inverse ratio of the number of users in that class. We also used the -B option to introduce the bias term. Next, we describe the evaluation metrics. We used accuracy and F1–measure in the dropout prediction task. Mean squared error (MSE) and Spearman rank correlation coefficient were used in the engagement level prediction task. These evaluation metrics are commonly used in classification and regression tasks. We compare the proposed models with baseline method. Because we have no previous work on both tasks, we defined baseline method of our own. The baseline method was trained in the same framework as the proposed methods except that they used only Session feature. We chose Session for baseline because frequency of use features such as Session were shown predictive to similar tasks (Kloft et al., 2014; Sadeque et al., 2015) to prospective user engagement. 5.2 Results Table 6 illustrates the result of dropout prediction task. The first row compares the proposed method with the baseline. We can see that the proposed Figure 1: Accuracies per the number of sessions in the observation period of the proposed method and the baseline. The rightmost points represent the accuracy of the users whose number of sessions in the observation period are equal to or more than 40. MSE Spearman Baseline 0.784 0.595 Proposed 0.578 0.727 Utterance frequency 0.632 0.693 Response frequency 0.798 0.584 Time interval 0.645 0.692 User profile 1.231 0.146 Table 8: MSE and Spearman’s ρ in the engagement level prediction task. model outperforms the baseline. 
This indicates the effectiveness of our feature set. The second row illustrates the performances of the proposed method when only one feature type is used. This result suggests that the utterance frequency and time interval features are especially useful, while the combination of all types of features performs the best. We conducted McNemar test (McNemar, 1947) to investigate the significance of these improvements, and confirmed that all improvements are statistically significant (p < 0.01). Table 7 shows the precisions and the recalls of dropout prediction task. As shown in Table 7, the precision of the proposed method performs the best while the recall is worst. We consider that the performance of the precision is more important for our model because taking proactive actions against users who are likely to stop using the system is one of the assumed applications. Taking proactive actions (e.g., push notifications) against users continually using the system might irritate them and de1208 Figure 2: Correlation between the oracle engagement levels and the ones predicted by the baseline method (left) and by the proposed method (right). crease their user engagement. Therefore, the rate of the users who actually intend to stop using the system in the users predicted as dropout affects the effectiveness of these proactive actions. The result that the precision of the proposed method is 0.553 and that of the baseline is 0.350 is, in other words, using the proposed model improves the effectiveness by 20% absolute in taking these actions. Figure 1 shows the accuracies per the number of sessions in the observation period of the proposed method and the baseline. The proposed method consistently outperforms the baseline throughout the number of sessions in the observation period. In particular, the proposed method predicts well the dropout of users whose number of sessions is around five compared to the baseline. These results again indicate the effectiveness of the combination of our feature set. Table 8 shows the result of engagement level prediction task. We again observe similar trends to the dropout prediction task. The proposed method outperforms the baseline. The utterance frequency and time interval features are the most effective, while the combination of all four feature types achieves the best performance in both evaluation metrics. Figure 2 visualizes the correlation between the oracle engagement levels and the ones predicted by the baseline (left) and by the proposed method (right). We can intuitively reconfirm that the proposed method is able to predict the engagement levels reasonably well. 5.3 Investigation of feature weights We investigate weights of the features learned by the SVR for figuring out what features contribute to the precise prediction of prospective user engagement. Table 9 exemplifies features that received large weights for the four feature types. We observe that most features with large positive or negative weights are from the utterance frequency and time interval features. Those include Session, Utterance, and InactivePeriod. It is interesting to see that UrgedUtterance, which is based on an utterance type specific to mobile users, also receives a large positive weight. Further detailed analysis revealed that the proposed model captures some linguistic properties that correlate with the prospective user engagement. For example, UtteranceTopic36 and UtteranceTopic23 recieve positive and negative weights, respectively. 
This follows our intuition since those clusters correspond to greeting and curse words (c.f. Table 5). We also observe Response(WORD CHAIN), Response(QUIZ) (word association quiz), and Response(TRIVIA) (showing some trivia) receive positive weights. This means that playing games or showing some trivia attract users. It is interesting to see that this result is consistent with findings in (Kobayashi et al., 2015). It also follows our intuition that the weight of ErrorMessage feature is negative. 5.4 Discussion on length of observation period Next, we investigate how the length of the observation period affects the prediction performance. We varied the length of the observation periods from one to eight weeks, and evaluated the results (Figure 3). Figure 3 demonstrates that the model perfor1209 Figure 3: Results of dropout prediction (left) and engagement level prediction (right) across different observation periods (in weeks). Weight Feature 0.67 Session 0.59 Utterance 0.28 Session7Weeks 0.26 UrgedUtterance 0.02 UtteranceTopic36 -0.05 UtteranceTopic23 0.08 Response(WORD CHAIN) 0.08 Response(QUIZ) 0.04 Response(TRIVIA) -0.03 ErrorMessage -0.23 InactivePeriod(ALARM) -0.46 InactivePeriod 0.05 Birthday 0.04 Age60s Table 9: Feature weights learned by the SVR. mance generally improves as the observation period becomes longer in both tasks. When we increase the length of the observation period from one week to eight weeks, the accuracy increases by 7.9% in the dropout prediction and Spearman’s ρ increases by 4.1 point in the engagement level prediction. The most significant improvements are achieved when we increase the length from one week to two weeks in the three metrics except the F–measure. This suggests that it is generally effective to collect user dialogues of two weeks long, rather than as long as eight weeks or more. This approach would allow to make predictions promptly without waiting for user dialogues to be collected for a long time, while harming accuracy (or other evaluation metrics) as little as possible. 5.5 Application to other intelligent assistants Here, we discuss how well our approach applies to intelligent assistants other than Voice Assist. The results of this study are considered to apply to other intelligent assistants so long as user logs like the ones in Table 2 are available. The concern is that some attributes in Table 2 may not be available in other systems. In the following, we investigate two attributes, response types and profiles, that are specific to Voice Assist. We consider that response types like ours are available in user logs of many other intelligent assistants as well. Because our response types mostly correspond to commands issued when operating mobile devices, response types analogous to ours can be obtained by simply logging the commands. Alternatively, it would be possible to employ taggers like (Jiang et al., 2015) to automatically type system responses. As for profiles, it is likely that similar information is also available in many other intelligent assistants because profile registration is a common function in many IT services including intelligent assistants. For example, Cortana offers greetings and other activities on special days registered by users.6 Even if user profiles were not at all available, we consider that it would not seriously spoil the significance of this study, because our experiments revealed that user profiles are among the least predictive features. 6http://m.windowscentral.com/articles (an article posted on Dec. 
5, 2015) 1210 6 Related Work Many dialogue studies have explored the issue of detecting user engagement as well as related affects such as interest and uncertainty (Wang and Hirschberg, 2011; Forbes-Riley et al., 2012; Forbes-Riley and Litman, 2013; Oertel and Salvi, 2013). As discussed in Section 1, these studies typically use a single user utterance to predict whether the user is currently engaged in dialogues with systems. We introduced a new perspective on this line of research by exploring models of predicting prospective user engagement in a largescale empirical study. Kobayashi et al. (2015) investigated how games played with intelligent assistants affect prospective user engagement. Although their research interest was prospective user engagement like ours, they exclusively studied the effect of playing game, and left other factors unexplored. In addition, they did not develop any prediction models. Recently, user satisfaction for intelligent assistants gain attention(Jiang et al., 2015; Kiseleva et al., 2016a; Kiseleva et al., 2016b). Jiang et al. (2015) proposed an automatic method of assessing user satisfaction with intelligent assistants. Kiseleva et al. extended the study of Jiang et al. for prediction (2016a) and detailed understanding (2016b) of user satisfaction with intelligent assistants. Although both satisfaction and engagement are affective states worth considering by intelligent assistants, their research goals were quite different from ours. In their studies, user satisfaction was measured as to whether intelligent assistants can accomplish predefined tasks (e.g., checking the exchange rate between US dollars and Australian dollars). This virtually assesses task-level response accuracy, which is a different notion from user engagement. Nevertheless, we consider that their studies are closely related to ours and indeed helpful for improving the proposed model. Since user satisfaction is considered to greatly affect prospective user engagement, it might be a good idea to use automatically evaluated satisfaction levels as additional features. The proposed model currently uses ErrorMessage feature as an alternative that can be implemented with ease. Several studies have investigated the chances of predicting continuous participation in SNSs such as MOOC and health care forum (Ros´e and Siemens, 2014; Kloft et al., 2014; Ramesh et al., 2014; Sadeque et al., 2015). Unlike those studies, this study exclusively investigates a specific type of dialogue system, namely intelligent assistants, and aims at uncovering usage and/or response patterns that strongly affect prospective user engagement. Consequently, many of the proposed features are specially designed to analyze intelligent assistant users rather than SNS participants. Our work also relates to the evaluation of dialogue systems. Walker et al. (1997) presented the offline evaluation framework for spoken dialog system (PARADISE). They integrate various evaluation metrics such as dialogue success and dialogue costs into one performance measure function. Although our goal is to predict prospective user engagement and different from theirs, some measures (e.g., the number of utterances) are useful to predict prospective user engagement with intelligent assistants. 7 Conclusion This paper explored two tasks of predicting prospective user engagement with intelligent assistants: dropout prediction and engagement level prediction. 
The experiments successfully demonstrated that reasonable performance can be archived in both tasks. Also, we examined how the length of the observation period affects prediction performance, and investigated the trade-off between prediction accuracy and instancy. The future work includes using those prediction models in a real service to take targeted actions to users who are likely to stop using intelligent assistants. References Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874. Kate Forbes-Riley and Diane Litman. 2013. When does disengagement correlate with performance in spoken dialog computer tutoring? International Journal of Artificial Intelligence in Education, 22(12):39–58. Kate Forbes-Riley, Diane Litman, Heather Friedberg, and Joanna Drummond. 2012. Intrinsic and extrinsic evaluation of an automatic user disengagement detector for an uncertainty-adaptive spoken dialogue system. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 91–102. Association for Computational Linguistics. 1211 Jiepu Jiang, Ahmed Hassan Awadallah, Rosie Jones, Umut Ozertem, Imed Zitouni, Ranjitha Gurunath Kulkarni, and Omar Zia Khan. 2015. Automatic online evaluation of intelligent assistants. In Proceedings of the 24th International Conference on World Wide Web, pages 506–516. International World Wide Web Conferences Steering Committee. Julia Kiseleva, Kyle Williams, Jiepu Jiang, Ahmed Hassan Awadallah, Imed Zitouni, Aidan C Crook, and Tasos Anastasakos. 2016a. Predicting user satisfaction with intelligent assistants. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 495–505. ACM. Julia Kiseleva, Kyle Williams, Jiepu Jiang, Ahmed Hassan Awadallah, Aidan C Crook, Imed Zitouni, and Tasos Anastasakos. 2016b. Understanding user satisfaction with intelligent assistants. In Proceedings of the 2016 ACM SIGIR Conference on Human Information Interaction and Retrieval, pages 121– 130. ACM. Marius Kloft, Felix Stiehler, Zhilin Zheng, and Niels Pinkwart. 2014. Predicting MOOC dropout over weeks using machine learning methods. In Proceedings of the 2014 Empirical Methods in Natural Language Processing, pages 60–65. Association for Computational Linguistics. Hayato Kobayashi, Kaori Tanio, and Manabu Sassano. 2015. Effects of game on user engagement with spoken dialogue system. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 422–426. Association for Computational Linguistics. Yujian Li and Bo Liu. 2007. A normalized Levenshtein distance metric. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(6):1091–1095. James MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematics, Statistics and Probability, Vol. 1, pages 281–297. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. 
Catharine Oertel and Giampiero Salvi. 2013. A gazebased method for relating group involvement to individual engagement in multimodal multiparty dialogue. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction, pages 99–106. ACM. Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daum´e III, and Lise Getoor. 2014. Learning latent engagement patterns of students in online courses. In Proceedings of the Twenty-Eighth AAAI Conference Artificial Intelligence, pages 1272–1278. Association for the Advancement of Artificial Intelligence. Carolyn Ros´e and George Siemens. 2014. Shared task on prediction of dropout over time in massively open online courses. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing Workshop on Modeling Large Scale Social Interaction in Massively Open Online Courses, pages 39–41. Association for Computational Linguistics. Farig Sadeque, Thamar Solorio, Ted Pedersen, Prasha Shrestha, and Steven Bethard. 2015. Predicting continued participation in online health forums. In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis, pages 12–20. Association for Computational Linguistics. Alex J. Smola and Bernhard Sch¨olkopf. 2004. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222. Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics, pages 271–280. Association for Computational Linguistics. William Yang Wang and Julia Hirschberg. 2011. Detecting levels of interest from spoken dialog with multistream prediction feedback and similarity based hierarchical fusion learning. In Proceedings of the 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 152–161. Association for Computational Linguistics. 1212
2016
114
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1213–1223, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Resolving References to Objects in Photographs using the Words-As-Classifiers Model David Schlangen Sina Zarrieß Casey Kennington Dialogue Systems Group // CITEC // Faculty of Linguistics and Literary Studies Bielefeld University, Germany [email protected] Abstract A common use of language is to refer to visually present objects. Modelling it in computers requires modelling the link between language and perception. The “words as classifiers” model of grounded semantics views words as classifiers of perceptual contexts, and composes the meaning of a phrase through composition of the denotations of its component words. It was recently shown to perform well in a game-playing scenario with a small number of object types. We apply it to two large sets of real-world photographs that contain a much larger variety of object types and for which referring expressions are available. Using a pre-trained convolutional neural network to extract image region features, and augmenting these with positional information, we show that the model achieves performance competitive with the state of the art in a reference resolution task (given expression, find bounding box of its referent), while, as we argue, being conceptually simpler and more flexible. 1 Introduction A common use of language is to refer to objects in the shared environment of speaker and addressee. Being able to simulate this is of particular importance for verbal human/robot interfaces (HRI), and the task has consequently received some attention in this field (Matuszek et al., 2012; Tellex et al., 2011; Krishnamurthy and Kollar, 2013). Here, we study a somewhat simpler precursor task, namely that of resolution of reference to objects in static images (photographs), but use a larger set of object types than is usually done in HRI work (> 300, see below). More formally, the task is to retrieve, given a referring expression e and an image I, the region bb∗of the image that is most likely to contain the referent of the expression. As candidate regions, we use both manually annotated regions as well as automatically computed ones. As our starting point, we use the “words-asclassifiers” model recently proposed by Kennington and Schlangen (2015). It has before only been tested in a small domain and with specially designed features; here, we apply it to real-world photographs and use learned representations from a convolutional neural network (Szegedy et al., 2015). We learn models for between 400 and 1,200 words, depending on the training data set. As we show, the model performs competitive with the state of the art (Hu et al., 2016; Mao et al., 2016) on the same data sets. Our background interest in situated interaction makes it important for us that the approach we use is ‘dialogue ready’; and it is, in the sense that it supports incremental processing (giving results while the incoming utterance is going on) and incremental learning (being able to improve performance from interactive feedback). 
However, in this paper we focus purely on ‘batch’, noninteractive performance.1 2 Related Work The idea of connecting words to what they denote in the real world via perceptual features goes back at least to Harnad (1990), who coined “The Symbol Grounding Problem”: “[H]ow can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?” The pro1The code for reproducing the results reported in this paper can be found at https://github.com/ dsg-bielefeld/image_wac. 1213 posed solution was to link ‘categorial representations’ with “learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections”. This suggestion has variously been taken up in computational work. An early example is Deb Roy’s work from the early 2000s (Roy et al., 2002; Roy, 2002; Roy, 2005). In (Roy et al., 2002), computer vision techniques are used to detect object boundaries in a video feed, and to compute colour features (mean colour pixel value), positional features, and features encoding the relative spatial configuration of objects. These features are then associated in a learning process with certain words, resulting in an association of colour features with colour words, spatial features with prepositions, etc., and based on this, these words can be interpreted with reference to the scene currently presented to the video feed. Of more recent work, that of Matuszek et al. (2012) is closely related to the approach we take. The task in this work is to compute (sets of) referents, given a (depth) image of a scene containing simple geometric shapes and a natural language expression. In keeping with the formal semantics tradition, a layer of logical form representation is assumed; it is not constructed via syntactic parsing rules, however, but by a learned mapping (semantic parsing). The non-logical constants of this representation then are interpreted by linking them to classifiers that work on perceptual features (representing shape and colour of objects). Interestingly, both mapping processes are trained jointly, and hence the links between classifiers and nonlogical constants on the one hand, and non-logical constants and lexemes on the other are induced from data. In the work presented here, we take a simpler approach that forgoes the level of semantic representation and directly links lexemes and perceptions, but does not yet learn the composition. Most closely related on the formal side is recent work by Larsson (2015), which offers a very direct implementation of the ‘words as classifiers’ idea (couched in terms of type theory with records (TTR; (Cooper and Ginzburg, 2015)) and not model-theoretic semantics). In this approach, some lexical entries are enriched with classifiers that can judge, given a representation of an object, how applicable the term is to it. The paper also describes how these classifiers could be trained (or adapted) in interaction. The model is only specified theoretically, however, with hand-crafted classifiers for a small set of words, and not tested with real data. The second area to mention here is the recently very active one of image-to-text generation, which has been spurred on by the availability of large datasets and competitions structured around them. The task here typically is to generate a description (a caption) for a given image. 
A frequently taken approach is to use a convolutional neural network (CNN) to map the image to a dense vector (which we do as well, as we will describe below), and then condition a neural language model (typically, an LSTM) on this to produce an output string (Vinyals et al., 2015; Devlin et al., 2015). Fang et al. (2015) modify this approach somewhat, by using what they call “word detectors” first to specifically propose words for image regions, out of which the caption is then generated. This has some similarity to our word models as described below, but again is tailored more towards generation. Socher et al. (2014) present a more compositional variant of this type of approach where sentence representations are composed along the dependency parse of the sentence. The representation of the root node is then mapped into a multimodal space in which distance between sentence and image representation can be used to guide image retrieval, which is the task in that paper. Our approach, in contrast, composes on the level of denotations and not that of representation. Two very recent papers carry this type of approach over to the problem of resolving references to objects in images. Both (Hu et al., 2015) and (Mao et al., 2015) use CNNs to encode image information (and interestingly, both combine, in different ways, information from the candidate region with more global information about the image as a whole), on which they condition an LSTM to get a prediction score for fit of candidate region and referring expression. As we will discuss below, our approach has some similarities, but can be seen as being more compositional, as the expression score is more clearly composed out of individual word scores (with rule-driven composition, however). We will directly compare our results to those reported in these papers, as we were able to use the same datasets. 1214 3 The “Words-As-Classifiers” Model We now briefly review (and slightly reformulate) the model introduced by Kennington and Schlangen (2015). It has several components: A Model of Word Meanings Let w be a word whose meaning is to be modelled, and let x be a representation of an object in terms of its visual features. The core ingredient then is a classifier then takes this representation and returns a score fw(x), indicating the “appropriateness” of the word for denoting the object. Noting a (loose) correspondence to Montague’s (1974) intensional semantics, where the intension of a word is a function from possible worlds to extensions (Gamut, 1991), the intensional meaning of w is then defined as the classifier itself, a function from a representation of an object to an “appropriateness score”:2 [[w]]obj = λx.fw(x) (1) (Where [[.]] is a function returning the meaning of its argument, and x is a feature vecture as given by fobj, the function that computes the representation for a given object.) The extension of a word in a given (here, visual) discourse universe W can then be modelled as a probability distribution ranging over all candidate objects in the given domain, resulting from the application of the word intension to each object (xi is the feature vector for object i, normalize() vectorized normalisation, and I a random variable ranging over the k candidates): [[w]]W obj = normalize(([[w]]obj(x1), . . . , [[w]]obj(xk))) = normalize((fw(x1), . . . , fw(xk))) = P(I|w) (2) Composition Composition of word meanings into phrase meanings in this approach is governed by rules that are tied to syntactic constructions. 
In the following, we only use simple multiplicative composition for nominal constructions:

$[\![\,[_{\mathrm{nom}}\ w_1, \ldots, w_k]\,]\!]^W = [\![\mathrm{NOM}]\!]^W\,[\![w_1, \ldots, w_k]\!]^W = \circ_{/N}([\![w_1]\!]^W, \ldots, [\![w_k]\!]^W) \quad (3)$

where $\circ_{/N}$ is defined as $\circ_{/N}([\![w_1]\!]^W, \ldots, [\![w_k]\!]^W) = P_\circ(I \mid w_1, \ldots, w_k)$, with

$P_\circ(I = i \mid w_1, \ldots, w_k) = \frac{1}{Z}\,\bigl(P(I = i \mid w_1) * \cdots * P(I = i \mid w_k)\bigr) \ \text{for } i \in I \quad (4)$

(Z takes care that the result is normalized over all candidate objects.) 2(Larsson, 2015) develops this intension/extension distinction in more detail for his formalisation.

Selection To arrive at the desired extension of a full referring expression—an individual object, in our case—one additional element is needed, and this is contributed by the determiner. For uniquely referring expressions (“the red cross”), what is required is to pick the most likely candidate from the distribution:

$[\![\text{the}]\!] = \lambda x.\ \arg\max_{\mathrm{Dom}(x)} x \quad (5)$

$[\![\,[\text{the}]\,[_{\mathrm{nom}}\ w_1, \ldots, w_k]\,]\!]^W = \arg\max_{i \in W}\ \bigl[\,[\![\,[_{\mathrm{nom}}\ w_1, \ldots, w_k]\,]\!]^W\,\bigr] \quad (6)$

In other words, the prediction of an expression such as “the brown shirt guy on right” is computed by first getting the responses of the classifiers corresponding to the words, individually for each object. I.e., the classifier for “brown” is applied to objects $o_1, \ldots, o_n$. This yields a vector of responses (of dimensionality n, the number of candidate objects); similarly for all other words. These vectors are then multiplied, and the predicted object is the maximal component of the resulting vector. Figure 1 gives a schematic overview of the model as implemented here, including the feature extraction process.
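To make the composition and selection steps concrete, here is a minimal Python sketch of this resolution procedure. It assumes per-word binary classifiers with a scikit-learn-style predict_proba interface stored in a dict (word_classifiers) and a feature matrix with one row per candidate region; these names and the container are illustrative, not the authors' implementation.

import numpy as np

def resolve_reference(expression_tokens, region_feats, word_classifiers):
    """Minimal sketch of the words-as-classifiers resolution step.

    expression_tokens -- tokens of the referring expression
    region_feats      -- (n_regions, n_features) array, one row per candidate region
    word_classifiers  -- dict mapping a word to a trained binary classifier with a
                         scikit-learn-style predict_proba (hypothetical container)
    """
    n_regions = region_feats.shape[0]
    combined = np.ones(n_regions)
    for w in expression_tokens:
        clf = word_classifiers.get(w)
        if clf is None:                       # no classifier for this word: skip it
            continue
        # word intension applied to every candidate region (eqs. 1-2)
        responses = clf.predict_proba(region_feats)[:, 1]
        combined *= responses                 # multiplicative composition (eqs. 3-4)
    if combined.sum() > 0:
        combined = combined / combined.sum()  # normalisation by Z
    return int(np.argmax(combined)), combined  # determiner picks the max (eqs. 5-6)

The returned index corresponds to the predicted region; the normalised vector is the distribution P(I | w1, ..., wk) over candidates.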
The pipeline shown in Figure 1: the image with bounding boxes of objects (R, R’, …); regions converted to 224 x 224 pixels; a forward pass through the convolutional neural network; application of the word classifiers; and
 composition of responses x x’ 1024 dim. vector Region position & size (7 feat.s) Figure 1: Overview of the model 4 Data: Images & Referring Expressions SAIAPR TC-12 / ReferItGame The basis of this data set is the IAPR TC-12 image retrieval benchmark collection of “20,000 still natural images taken from locations around the world and comprising an assorted cross-section of still natural images” (Grubinger et al., 2006). A typical example of an image from the collection is shown in Figure 2 on the left. This dataset was later augmented by Escalante et al. (2010) with segmentation masks identifying objects in the images (an average of 5 objects per image). Figure 2 (middle) gives an example of such a segmentation. These segmentations were 1215 Figure 2: Image 27437 from IAPR TC-12 (left), with region masks from SAIAPR TC-12 (middle); “brown shirt guy on right” is a referring expression in REFERITGAME for the region singled out on the right done manually and provide close maskings of the objects. This extended dataset is also known as “SAIAPR TC-12” (for “segmented and annotated IAPR TC-12”). The third component is provided by Kazemzadeh et al. (2014), who collected a large number of expressions referring to (presegmented) objects from these images, using a crowd-sourcing approach where two players were paired and a director needed to refer to a predetermined object to a matcher, who then selected it. (An example is given in Figure 2 (right).) This corpus contains 120k referring expressions, covering nearly all of the 99.5k regions from SAIAPR TC-12.3 The average length of a referring expression from this corpus is 3.4 tokens. The 500k token realise 10,340 types, with 5785 hapax legomena. The most frequent tokens (other than articles and prepositions) are “left” and “right”, with 22k occurrences. (In the following, we will refer to this corpus as REFERIT.) This combination of segmented images and referring expressions has recently been used by Hu et al. (2015) for learning to resolve references, as we do here. The authors also tested their method on region proposals computed using the EdgeBox algorithm (Zitnick and Doll´ar, 2014). They kindly provided us with this region proposal data (100 best proposals per image), and we compare our results on these region proposals with theirs below. The authors split the dataset evenly into 10k images (and their corresponding referring expressions) for training and 10k for testing. As we needed more training data, we made a 90/10 split, ensuring that all our test images are from their test split. 3The IAPR TC-12 and SAIAPR TC-12 data is available from http://imageclef.org; REFERITGAME from http://tamaraberg.com/referitgame. MSCOCO / GoogleRefExp / ReferItGame The second dataset is based on the “Microsoft Common Objects in Context” collection (Lin et al., 2014), which contains over 300k images with object segmentations (of objects from 80 prespecified categories), object labels, and image captions. Figure 3 shows some examples of images containing objects of type “person”. This dataset was augmented by Mao et al. (2015) with what they call ‘unambiguous object descriptions’, using a subset of 27k images that contained between 2 and 4 instances of the same object type within the same image. The authors collected and validated 100k descriptions in a crowd-sourced approach as well, but unlike in the ReferItGame setup, describers and validators were not connected live in a game setting.4 The average length of the descriptions is 8.3 token. 
The 790k token in the corpus realise 14k types, with 6950 hapax legomena. The most frequent tokens other than articles and prepositions are “man” (15k occurrences) and “white” (12k). (In the following, we will refer to this corpus as GREXP.) The authors also computed automatic region proposals for these images, using the multibox method of Erhan et al. (2014) and classifying those using a model trained on MSCOCO categories, retaining on average only 8 per image. These region proposals are on average of a much higher quality than those we have available for the other dataset. As mentioned in (Mao et al., 2015), Tamara Berg and colleagues have at the same time used their ReferItGame paradigm to collect referring expressions for MSCOCO images as well. Upon request, Berg and colleagues also kindly provided us with this data—140k referring expressions, for 20k images, average length 3.5 token, 500k token altogether, 10.3k types, 5785 hapax legomena; most frequent also “left” (33k occurrences) 4The data is available from https://github.com/ mjhucla/Google_Refexp_toolbox. 1216 REFCOCO: green woman GREXP: a woman wearing blue jeans REFCOCO: person left GREXP: a boy is ready to play who is wearing green color pant Figure 3: Examples from MSCOCO and “right” (32k). (In the following, we will call this corpus REFCOCO.) In our experiments, we use the training/validation/test splits on the images suggested by Berg et al., as the splits provided by Mao et al. (2015) are on the level of objects and have some overlap in images. It is interesting to note the differences in the expressions from REFCOCO and GREXP, the latter on average being almost 5 token longer. Figure 3 gives representative examples. We can speculate that the different task descriptions (“refer to this object” vs. “produce an unambiguous description”) and the different settings (live to a partner vs. offline, only validated later) may have caused this. As we will see below, the GREXP descriptions did indeed cause more problems to our approach, which is meant for reference in interaction. 5 Training the Word/Object Classifiers The basis of the approach we use are the classifiers that link words and images. These need to be trained from data; more specifically, from pairings of image regions and referring expressions, as provided by the corpora described in the previous section. Representing Image Regions The first step is to represent the information from the image regions. We use a deep convolutional neural network, “GoogLeNet” (Szegedy et al., 2015), that was trained on data from the Large Scale Visual Recognition Challenge 2014 (ILSVRC2014) from the ImageNet corpus (Deng et al., 2009) to extract features.5 It was optimised to recognise categories from that challenge, which are different from those occurring in either SAIAPR or COCO, but in any case we only use the final fully-connected layer before the classification layer, to give us a 1024 dimensional representation of the region. We augment this with 7 features that encode information about the region relative to the image: the (relative) coordinates of two corners, its (relative) area, distance to the center, and orientation of the image. The full representation hence is a vector of 1031 features. (See also Figure 1 above.) Selecting candidate words How do we select the words for which we train perceptual classifiers? There is a technical consideration to be made here and a semantic one. 
The technical consideration is that we need sufficient training data for the classifiers, and so can only practically train classifiers for words that occur often enough in the training corpus. We set a threshold here of a minimum of 40 occurences in the training corpus, determined empirically on the validation set to provide a good tradeoff between vocabulary coverage and number of training instances. The semantic consideration is that intuitively, the approach does not seem appropriate for all types of words; where it might make sense for attributes and category names to be modelled as image classifiers, it does less so for prepositions and other function words. Nevertheless, for now, we make the assumption that all words in a referring expression contribute information to the visual identification of its referent. We discuss the consequences of this decision below. This assumption is violated in a different way in phrases that refer via a landmark, such as in “the thing next to the woman with the blue shirt”. Here we cannot assume for example that the referent region provides a good instance of “blue” (since it is not the target object in the region that is described as blue), and so we exclude such phrases from the training set (by looking for a small set of expressions such as “left of”, “behind”, etc.; see appendix for a full list). This reduces the train5http://www.image-net.org/challenges/ LSVRC/2014/. We use the sklearn-theano (http://sklearn-theano. github.io/feature_extraction/index.html# feature-extraction) port of the Caffe replication and re-training (https://github.com/BVLC/caffe/ tree/master/models/bvlc_googlenet) of this network structure. 1217 ing portions of REFERIT, REFCOCO and GREXP to 86%, 95%, and 82% of their original size, respectively (counting referring expressions, not tokens). Now that we have decided on the set of words for which to train classifiers, how do we assemble the training data? Positive Instances Getting positive instances from the corpus is straightforward: We pair each word in a referring expression with the representation of the region it refers to. That is, if the word “left” occurs 20,000 times in expressions in the training corpus, we have 20,000 positive instances for training its classifier. Negative Instances Acquiring negative instances is less straightforward. The corpus does not record inappropriate uses of a word, or ‘negative referring expressions’ (as in “this is not a red chair”). To create negative instances, we make a second assumption which again is not generally correct, namely that when a word was never in the corpus used to refer to an object, this object can serve as a negative example for that word/object classifier. In the experiments reported below, we randomly selected 5 image regions from the training corpus whose referring expressions (if there were any) did not contain the word in question.6 The classifiers Following this regime, we train binary logistic regression classifiers (with ℓ1 regularisation) on the visual object features representations, for all words that occurred at least 40 times in the respective training corpus.7 To summarise, we train separate binary classifiers for each word (not making any a-priori distinction between function words and others, or attribute labels and category labels), giving them the task to predict how likely it would be that the word they represent would be used to refer to the image region they are given. 
All classifiers are presented during training with data sets with the same balance of positive and negative examples (here, a fixed ratio of 1 positive to 5 negative). Hence, the classifiers themselves do not reflect any word frequency effects; our claim (to be validated in future 6This approach is inspired by the negative sampling technique of Mikolov et al. (2013) for training textual word embeddings. 7We used the implementation in the scikit learn package (Pedregosa et al., 2011). %tst acc mrr arc >0 acc REFERIT 1.00 0.65 0.79 0.89 0.97 0.67 REFERIT; NR 0.86 0.68 0.82 0.91 0.97 0.71 (Hu et al., 2015) – 0.73 – – – – REFCOCO 1.00 0.61 0.77 0.91 0.98 0.62 REFCOCO; NR 0.94 0.63 0.78 0.92 0.98 0.64 (Mao et al., 2015) – 0.70 – – – – GREXP 1.00 0.43 0.65 0.86 1.00 0.43 GREXP; NR 0.82 0.45 0.67 0.88 1.00 0.45 (Mao et al., 2015) – 0.61 – – – – Table 1: Results; separately by corpus. See text for description of columns and rows. work) is that any potential effects of this type are better modelled separately. 6 Experiments The task in our experiments is the following: Given an image I together with bounding boxes of regions (bb1, . . . , bbn) within it, and a referring expression e, predict which of these regions contains the referent of the expression. By Corpus We start with training and testing models for all three corpora (REFERIT, REFCOCO, GREXP) separately. But first, we establish some baselines. The first is just randomly picking one of the candidate regions. The second is a 1-rule classifier that picks the largest region. The respective accuracies on the corpora are as follows: REFERIT 0.20/0.19; REFCOCO 0.16/0.23; GREXP 0.19/0.20. Training on the training sets of REFERIT, REFCOCO and GREX with the regime described above (min. 40 occurrences) gives us classifiers for 429, 503, and 682 words, respectively. Table 1 shows the evaluation on the respective test parts: accuracy (acc), mean reciprocal rank (mrr) and for how much of the expression, on average, a word classifier is present (arc). ‘>0’ shows how much of the testcorpus is left if expressions are filtered out for which not even a single word is the model (which we evaluate by default as false), and accuracy for that reduced set. The ‘NR’ rows give the same numbers for reduced test sets in which all relational expressions have been removed; ‘%tst’ shows how much of a reduction that is relative to the full testset. The rows with the citations give the best reported results from the literature.8 As this shows, in most cases we come close, but do not quite reach these results. The distance is the biggest for GREXP with its much longer expressions. As discussed above, not only are the descriptions longer on average in this corpus, the 8Using a different split than (Mao et al., 2015), as their train/test set overlaps on the level of images. 1218 vocabulary size is also much higher. Many of the descriptions contain action descriptions (“the man smiling at the woman”), which do not seem to be as helpful to our model. Overall, the expressions in this corpus do appear to be more like ‘mini-captions’ describing the region rather than referring expressions that efficiently single it out among the set of distractors; our model tries to capture the latter. Combining Corpora A nice effect of our setup is that we can freely mix the corpora for training, as image regions are represented in the same way regardless of source corpus, and we can combine occurrences of a word across corpora. 
We tested combining the testsets of REFERIT and REFCOCO (RI+RC in the Table below), REFCOCO and GREXP (RC+GR), and all three (REFERIT, REFCOCO, and GREXP; RI+RC+GR), yielding models for 793, 933, 1215 words, respectively (with the same “min. 40 occurrences” criterion). For all testsets, the results were at least stable compared to Table 1, for some they improved. For reasons of space, we only show the improvements here. %tst acc mrr arc >0 acc RI+RC/RC 1.00 0.63 0.78 0.92 0.98 0.64 RI+RC/RC; NR 0.94 0.65 0.79 0.93 0.98 0.66 RI+RC+GR/RC 1.00 0.63 0.78 0.94 0.99 0.64 RI+RC+GR/RC; NR 0.94 0.65 0.79 0.95 0.99 0.66 RI+RC+GR/GR 1.00 0.47 0.68 0.90 1.00 0.47 RI+RC+GR/GR; NR 0.82 0.49 0.70 0.91 1.00 0.49 Table 2: Results, combined corpora Computed Region Proposals Here, we cannot expect the system to retrieve exactly the ground truth bounding box, since we cannot expect the set of automatically computed regions to contain it. We follow Mao et al. (2015) in using intersection over union (IoU) as metric (the size of the intersective area between candidate and ground truth bounding box normalised by the size of the union) and taking an IoU ≥0.5 of the top candidate as a threshold for success (P@1). As a more relaxed metric, we also count for the SAIAPR proposals (of which there are 100 per image) as success when at least one among the top 10 candidates exceeds this IoU threshold (R@10). (For MSCOCO, there are only slightly above 5 proposals per image on average, so computing this more relaxed measure does not make sense.) The random baseline (RND) is computed by applying the P@1 criterion to a randomly picked region proposal. (That it is higher than 1/#regions for SAIAPR shows that the regions cluster around objects.) RP@1 RP@10 rnd REFERIT 0.09 0.24 0.03 REFERIT; NR 0.10 0.26 0.03 (Hu et al., 2015) 0.18 0.45 REFCOCO 0.52 – 0.17 REFCOCO; NR 0.54 – 0.17 (Mao et al., 2015) 0.52 GREXP 0.36 – 0.16 GREXP; NR 0.37 – 0.17 (Mao et al., 2015) 0.45 Table 3: Results on region proposals With the higher quality proposals provided for the MSCOCO data, and the shorter, more prototypical referring expressions from REFCOCO, we narrowly beat the reported results. (Again, note that we use a different split that ensures separation on the level of images between training and test.) (Hu et al., 2015) performs relatively better on the region proposals (the gap is wider), on GREXP, we come relatively closer using these proposals. We can speculate that using automatically computed boxes of a lower selectivity (REFERIT) shifts the balance between needing to actually recognise the image and getting information from the shape and position of the box (our positional features; see Section 5). Ablation Experiments To get an idea about what the classifiers actually pick up on, we trained variants given only the positional features (POS columns below in Table 4) and only object features (NOPOS columns). We also applied a variant of the model with only the top 20 classifiers (in terms of number of positive training examples; TOP20). We only show accuracy here, and repeat the relevant numbers from Table 1 for comparison (FULL). nopos pos full top20 RI 0.53 0.60 0.65 0.46 RI; NR 0.56 0.62 0.68 0.48 RC 0.44 0.55 0.61 0.52 RC; NR 0.45 0.57 0.63 0.53 Table 4: Results with reduced models This table shows an interesting pattern. To a large extent, the object image features and the positional features seem to carry redundant information, with the latter on their own performing better than the former on their own. 
The full model, however, still gains something from the combination 1219 of the feature types. The top-20 classifiers (and consequently, top 20 most frequent words) alone reach decent performance (the numbers are shown for the full test set here; if reduced to only utterances where at least one word is known, the numbers rise, but the reduction of the testset is much more severe than for the full models with much larger vocabulary). 1 2 3 4 5 6 7 8 9 10 0.0 0.2 0.4 0.6 0.8 1.0 referit refcoco grexp %t,ri %t, rc %t, gr Figure 4: Accuracy by expression length (top 3 lines); percentage of expressions with this length (lower 3 lines). Error Analysis Figure 4 shows the accuracy of the model split by length of the referring expression (top lines; lower lines show the proportion of expression of this length in the whole corpus). The pattern is similar for all corpora (but less pronounced for GREXP): shorter utterances fare better. Manual inspection of the errors made by the system further corroborates the suspicion that composition as done here neglects too much of the internal structure of the expression. An example from REFERIT where we get a wrong prediction is “second person from left”. The model clearly does not have a notion of counting, and here it wrongly selects the leftmost person. In a similar vein, we gave results above for a testset where spatial relations where removed, but other forms of relation (e.g., “child sitting on womans lap”) that weren’t modelled still remain in the corpus. We see as an advantage of the model that we can inspect words individually. Given the performance of short utterances, we can conclude that the word/object classifiers themselves perform reasonably well. This seems to be somewhat independent of the number of training examples they received. Figure 5 shows, for REFERIT, # training instances (x-axis) vs. average accuracy on the validation set, for the whole vocabulary. As this shows, the classifiers tend to get better with more training instances, but there are good ones even with very little training material. Figure 5: Average accuracy vs. # train instanc. Mean average precision (i.e., area under the precision / recall curve) over all classifiers (exemplarily computed for the RI+RC set, 793 words) is 0.73 (std 0.15). Interestingly, the 155 classifiers in the top range (average precision over 0.85) are almost all for concrete nouns; the 128 worst performing ones (below 0.60) are mostly other parts of speech. (See appendix.) This is, to a degree, as expected: our assumption behind training classifiers for all ocurring words and not pre-filtering based on their part-of-speech or prior hypotheses about visual relevance was that words that can occur in all kinds of visual contexts will lead to classifiers whose contributions cancel out across all candidate objects in a scene. However, the mean average precision of the classifiers for colour words is also relatively low at 0.6 (std 0.08), for positional words (“left”, “right”, “center”, etc.) it is 0.54 (std 0.1). This might suggest that the features we take from the CNN might indeed be more appropriate for tasks close to what they were originally trained on, namely category and not attribute prediction. We will explore this in future work. 
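As an illustration of the per-classifier evaluation mentioned above (mean average precision over all word classifiers), the following is a possible sketch using scikit-learn; the word_classifiers dict and the per-word binary relevance labels on the validation set are assumed containers, not the authors' code.

from sklearn.metrics import average_precision_score

def per_word_average_precision(word_classifiers, X_val, word_labels):
    """Sketch of the per-classifier evaluation behind the reported mean AP.

    word_classifiers -- dict word -> trained binary classifier (hypothetical)
    X_val            -- (n, d) validation feature matrix, one row per region
    word_labels      -- dict word -> length-n binary vector, 1 where the word
                        was used to refer to that region
    """
    ap = {}
    for word, clf in word_classifiers.items():
        scores = clf.predict_proba(X_val)[:, 1]
        ap[word] = average_precision_score(word_labels[word], scores)
    return ap

Sorting the resulting dictionary by score would reproduce the kind of best-performing and worst-performing word lists given in the appendix.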
7 Conclusions We have shown that the “words-as-classifiers” model scales up to a larger set of object types with a much larger variety in appearance (SAIAPR and MSCOCO); to a larger vocabulary and much less restricted expressions (REFERIT, REFCOCO, GREXP); and to use of automatically learned feature types (from a CNN). It achieves results that are comparable to those of more complex models. We see as advantage that the model we use is “transparent” and modular. Its basis, the word/object classifiers, ties in more directly with 1220 more standard approaches to semantic analysis and composition. Here, we have disregarded much of the internal structure of the expressions. But there is a clear path for bringing it back in, by defining other composition types for other construction types and different word models for other word types. Kennington and Schlangen (2015) do this for spatial relations in their simpler domain; for our domain, new and more richly annotated data such as VISUALgenome looks promising for learning a wide variety of relations.9 The use of denotations / extensions might make possible transfer of methods from extensional semantics, e.g. for the addition of operators such as negation or generalised quantifiers. The design of the model, as mentioned in the introduction, makes it amenable for use in interactive systems that learn; we are currently exploring this avenue. Lastly, the word/object classifiers also show promise in the reverse task, generation of referring expressions (Zarrieß and Schlangen, 2016). All this is future work. In its current state— besides, we believe, strongly motivating this future work—, we hope that the model can also serve as a strong baseline to other future approaches to reference resolution, as it is conceptually simple and easy to implement. Acknowledgments We thank Hu et al. (2016), Mao et al. (2016) and Tamara Berg for giving us access to their data. Thanks are also due to the anonymous reviewers for their very insightful comments. We acknowledge support by the Cluster of Excellence “Cognitive Interaction Technology” (CITEC; EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG), and by the DUEL project, also funded by DFG (grant SCHL 845/5-1). References Robin Cooper and Jonathan Ginzburg. 2015. Type theory with records for natural language semantics. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantic Theory 2nd edition. WileyBlackwell. Jia Deng, W. Dong, Richard Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09. 9http://visualgenome.org/ Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 100–105, Beijing, China, July. Association for Computational Linguistics. Dumitru Erhan, Christian Szegedy, Alexander Toshev, and Dragomir Anguelov. 2014. Scalable Object Detection Using Deep Neural Networks. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 2155–2162. Hugo Jair Escalante, Carlos a. Hern´andez, Jesus a. Gonzalez, a. L´opez-L´opez, Manuel Montes, Eduardo F. Morales, L. Enrique Sucar, Luis Villase˜nor, and Michael Grubinger. 2010. The segmented and annotated IAPR TC-12 benchmark. 
Computer Vision and Image Understanding, 114(4):419–428. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John Platt, Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In Proceedings of CVPR, Boston, MA, USA, June. IEEE. L. T. F. Gamut. 1991. Logic, Language and Meaning: Intensional Logic and Logical Grammar, volume 2. Chicago University Press, Chicago. Michael Grubinger, Paul Clough, Henning M¨uller, and Thomas Deselaers. 2006. The IAPR TC-12 benchmark: a new evaluation resource for visual information systems. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2006), pages 13–23, Genoa, Italy. Stevan Harnad. 1990. The symbol grounding problem. Physica D, 42:335–346. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2015. Natural language object retrieval. ArXiv / CoRR, abs/1511.04164. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Natural language object retrieval. In Proceedings of CVPR 2016, Las Vegas, USA, June. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L Berg. 2014. ReferItGame: Referring to Objects in Photographs of Natural Scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 787–798, Doha, Qatar. Casey Kennington and David Schlangen. 2015. Simple learning and compositional application of perceptually grounded word meanings for incremental 1221 reference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 292–301, Beijing, China, July. Association for Computational Linguistics. Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural language to the physical world. Transactions of the Association for Computational Linguistics, 1:193–206. Staffan Larsson. 2015. Formal semantics for perceptual classification. Journal of logic and computation, 25(2):335–369. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollr, and C.Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision ECCV 2014, volume 8693, pages 740–755. Springer International Publishing. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2015. Generation and comprehension of unambiguous object descriptions. ArXiv / CoRR, abs/1511.02283. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of CVPR 2016, Las Vegas, USA, June. Cynthia Matuszek, Nicholas Fitzgerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A Joint Model of Language and Perception for Grounded Attribute Learning. In Proceedings of the International Conference on Machine Learning (ICML 2012). Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013 (NIPS 2013), pages 3111– 3119, Lake Tahoe, Nevada, USA. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. 
Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Deb Roy, Peter Gorniak, Niloy Mukherjee, and Josh Juster. 2002. A trainable spoken language understanding system for visual object selection. In Proceedings of the International Conference on Speech and Language Processing 2002 (ICSLP 2002), Colorado, USA. Deb K. Roy. 2002. Learning visually-grounded words and syntax for a scene description task. Computer Speech and Language, 16(3). Deb Roy. 2005. Grounding words in perception and action: Computational insights. Trends in Cognitive Sciene, 9(8):389–396. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded Compositional Semantics for Finding and Describing Images with Sentences. Transactions of the ACL (TACL). Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In CVPR 2015, Boston, MA, USA, June. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation. In AAAI Conference on Artificial Intelligence, pages 1507–1514. Richmond H. Thomason, editor. 1974. Formal Philosophy: Selected Papers of Richard Montague. Yale University Press, New Haven and London. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition. Sina Zarrieß and David Schlangen. 2016. Easy things first: Installments improve referring expression generation for objects in photographs. In Proceedings of ACL 2016, Berlin, Germany, August. C Lawrence Zitnick and Piotr Doll´ar. 2014. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014, pages 391–405. Springer. A Supplemental Material Filtering relational expressions As described above, we filter out all referring expressions during training that contain either of the following tokens: RELWORDS = [’below’, ’above’, ’between’, ’not’, ’behind’, ’under’, ’underneath’, ’front of’, ’right of’, ’left of’, ’ontop of’, ’next to’, ’middle of’] 1222 Average Precision See Section 6. 
Classifiers with average precision over 0.85: [’giraffe’, ’coffee’, ’court’, ’riding’, ’penguin’, ’balloon’, ’ball’, ’mug’, ’turtle’, ’tennis’, ’beer’, ’seal’, ’cow’, ’bird’, ’horse’, ’drink’, ’koala’, ’sheep’, ’ceiling’, ’parrot’, ’bike’, ’cactus’, ’sun’, ’smoke’, ’llama’, ’fruit’, ’ruins’, ’waterfall’, ’nightstand’, ’books’, ’night’, ’coke’, ’skirt’, ’leaf’, ’wheel’, ’label’, ’pot’, ’animals’, ’cup’, ’tablecloth’, ’pillar’, ’flag’, ’field’, ’monkey’, ’bowl’, ’curtain’, ’plate’, ’van’, ’surfboard’, ’bottle’, ’fish’, ’umbrella’, ’bus’, ’shirtless’, ’train’, ’bed’, ’painting’, ’lamp’, ’metal’, ’paper’, ’sky’, ’luggage’, ’player’, ’face’, ’going’, ’desk’, ’ship’, ’raft’, ’lying’, ’vehicle’, ’trunk’, ’couch’, ’palm’, ’dress’, ’doors’, ’fountain’, ’column’, ’cars’, ’flowers’, ’tire’, ’plane’, ’against’, ’bunch’, ’car’, ’shelf’, ’bunk’, ’boat’, ’dog’, ’vase’, ’animal’, ’pack’, ’anyone’, ’clock’, ’glass’, ’tile’, ’window’, ’chair’, ’phone’, ’across’, ’cake’, ’branches’, ’bicycle’, ’snow’, ’windows’, ’book’, ’curtains’, ’bear’, ’guitar’, ’dish’, ’both’, ’tower’, ’truck’, ’bridge’, ’creepy’, ’cloud’, ’suit’, ’stool’, ’tv’, ’flower’, ’seat’, ’buildings’, ’shoes’, ’bread’, ’hut’, ’donkey’, ’had’, ’were’, ’fire’, ’food’, ’turned’, ’mountains’, ’city’, ’range’, ’inside’, ’carpet’, ’beach’, ’walls’, ’ice’, ’crowd’, ’mirror’, ’brush’, ’road’, ’anything’, ’blanket’, ’clouds’, ’island’, ’building’, ’door’, ’4th’, ’stripes’, ’bottles’, ’cross’, ’gold’, ’smiling’, ’pillow’] Classifiers with average precision below 0.6: [’shadow’, "woman’s", ’was’, ’bright’, ’lol’, ’blue’, ’her’, ’yes’, ’blk’, ’this’, ’from’, ’almost’, ’colored’, ’looking’, ’lighter’, ’far’, ’foreground’, ’yellow’, ’looks’, ’very’, ’second’, ’its’, ’dat’, ’stack’, ’dudes’, ’men’, ’him’, ’arm’, ’smaller’, ’half’, ’piece’, ’out’, ’item’, ’line’, ’stuff’, ’he’, ’spot’, ’green’, ’head’, ’see’, ’be’, ’black’, ’think’, ’leg’, ’way’, ’women’, ’furthest’, ’rt’, ’most’, ’big’, ’grey’, ’only’, ’like’, ’corner’, ’picture’, ’shoulder’, ’no’, ’spiders’, ’n’, ’has’, ’his’, ’we’, ’bit’, ’spider’, ’guys’, ’2’, ’portion’, ’are’, ’section’, ’us’, ’towards’, ’sorry’, ’where’, ’small’, ’gray’, ’image’, ’but’, ’something’, ’center’, ’i’, ’closest’, ’first’, ’middle’, ’those’, ’edge’, ’there’, ’or’, ’white’, ’-’, ’little’, ’them’, ’barely’, ’brown’, ’all’, ’mid’, ’is’, ’thing’, ’dark’, ’by’, ’back’, ’with’, ’other’, ’near’, ’two’, ’screen’, ’so’, ’front’, ’you’, ’photo’, ’up’, ’one’, ’it’, ’space’, ’okay’, ’side’, ’click’, ’part’, ’pic’, ’at’, ’that’, ’area’, ’directly’, ’in’, ’on’, ’and’, ’to’, ’just’, ’of’] Code The code required for reproducing the results reported here can be found at https://github.com/dsg-bielefeld/ image_wac. 1223
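For completeness, a small sketch of how the relational-expression filter from Section 5 might be applied, using the RELWORDS list above; the expression representation (a plain token string) and the helper name are assumptions, and the actual implementation is the one in the linked repository.

RELWORDS = ['below', 'above', 'between', 'not', 'behind', 'under', 'underneath',
            'front of', 'right of', 'left of', 'ontop of', 'next to', 'middle of']

def is_relational(expression):
    """Rough check (sketch): does the referring expression contain one of the
    relational markers and should therefore be excluded from classifier training?"""
    text = ' ' + expression.lower() + ' '
    # pad with spaces so single-word markers only match whole tokens
    return any(' ' + marker + ' ' in text for marker in RELWORDS)

# e.g.: train_exprs = [e for e in train_exprs if not is_relational(e)]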
2016
115
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1224–1234, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics RBPB: Regularization-Based Pattern Balancing Method for Event Extraction Lei Sha1,2, Jing Liu2, Chin-Yew Lin2, Sujian Li1, Baobao Chang1, Zhifang Sui1 1 Key Laboratory of Computational Linguistics, Ministry of Education School of Electronics Engineering and Computer Science, Peking University 2Knowledge Computing Group, Microsoft Research 1{shalei, lisujian, chbb, szf}@pku.edu.cn 2{liudani,cyl}@microsoft.com Abstract Event extraction is a particularly challenging information extraction task, which intends to identify and classify event triggers and arguments from raw text. In recent works, when determining event types (trigger classification), most of the works are either pattern-only or feature-only. However, although patterns cannot cover all representations of an event, it is still a very important feature. In addition, when identifying and classifying arguments, previous works consider each candidate argument separately while ignoring the relationship between arguments. This paper proposes a Regularization-Based Pattern Balancing Method (RBPB). Inspired by the progress in representation learning, we use trigger embedding, sentence-level embedding and pattern features together as our features for trigger classification so that the effect of patterns and other useful features can be balanced. In addition, RBPB uses a regularization method to take advantage of the relationship between arguments. Experiments show that we achieve results better than current state-of-art equivalents. 1 Introduction Event extraction has become a popular research topic in the area of information extraction. ACE 2005 defines event extraction task1 as three sub-tasks: identifying the trigger of an event, identifying the arguments of the event, and distinguishing their corresponding roles. As an example in Figure 1, there is an “Attack” event 1http://www.itl.nist.gov/iad/mig/tests/ace/2005/ triggered by “tear through” with three arguments. Each argument has one role. In the trigger classification stage, some previous approaches (Grishman et al., 2005; Ji and Grishman, 2008; Liao and Grishman, 2010; Huang and Riloff, 2012) use patterns to decide the types of event triggers. However, pattern-based approaches suffer from low recall since real world events usually have a large variety of representations. Some other approaches (Hong et al., 2011; Li et al., 2013; Lu and Roth, 2012) identify and classify event triggers using a large set of features without using patterns. Although these features can be very helpful, patterns are still indispensable in many cases because they can identify a trigger with the correct event type with more than 96% accuracy according to our data analysis on ACE 2005 data sets. In argument identification and classification, most approaches identify each candidate argument separately without considering the relation between arguments. We define two kinds of argument relations here: (1) Positive correlation: if one candidate argument belongs to one event, then the other is more likely to belong to the same event. For example, in Figure 1, the entity “a waiting shed” shares a common dependency head “tore” with “a powerful bomb”, so when the latter entity is identified as an argument, the former is more likely to be identified. 
(2) Negative correlation: if one candidate argument belongs to one event, then the other is less likely to belong to the same event. For example, in Figure 1, “bus” is irrelevant to other arguments, so if other entities are identified as arguments “bus” is less likely to be identified. Note that although all the above relation examples have something to do with dependency analysis, the positive/negative relationship depends not only on dependency parsing, but many other aspects as well. 1224 A powerful bomb tore through a waiting shed at the Davao airport while another explosion hit a bus Trigger Event type:Attack Arg1 Role: Instrument Arg2 Role: Target Arg3 Role: Place Figure 1: Event example: This is an event trigger by “tear through” with three arguments In this paper, we propose using both patterns and elaborately designed features simultaneously to identify and classify event triggers. In addition, we propose using a regularization method to model the relationship between candidate arguments to improve the performance of argument identification. Our method is called Regularization-Based Pattern Balancing Method method. The contributions of this paper are as follows: • Inspired by the progress of representation learning, we use trigger embedding, sentence-level embedding, and pattern features together as the our features for balancing. • We proposed a regularization-based method in order to make use of the relationship between candidate arguments. Our experiments on the ACE 2005 data set show that the regularization method does improve the performance of argument identification. 2 Related Work There is a large body of previous work devoted to event extraction. Many traditional works focus on using pattern based methods for identifying event type (Kim and Moldovan, 1993; Riloff and others, 1993; Soderland et al., 1995; Huffman, 1996; Freitag, 1998b; Ciravegna and others, 2001; Califf and Mooney, 2003; Riloff, 1996; Riloff et al., 1999; Yangarber et al., 2000; Sudo et al., 2003; Stevenson and Greenwood, 2005; Grishman et al., 2005; Ji and Grishman, 2008; Liao and Grishman, 2010; Huang and Riloff, 2012). (Shinyama and Sekine, 2006; Sekine, 2006) are unsupervised methods of extracting patterns from open domain texts. Pattern is not always enough, although some methods (Huang and Riloff, 2012; Liu and Strzalkowski, 2012) use bootstrapping to get more patterns. There are also feature-based classification methods (Freitag, 1998a; Chieu and Ng, 2002; Finn and Kushmerick, 2004; Li et al., 2005; Yu et al., 2005). Apart from the above methods, weakly supervised training (pattern-based and rule-based) of event extraction systems have also been explored (Riloff, 1996; Riloff et al., 1999; Yangarber et al., 2000; Sudo et al., 2003; Stevenson and Greenwood, 2005; Patwardhan and Riloff, 2007; Chambers and Jurafsky, 2011). In some of these systems, human work is needed to delete some nonsense patterns or rules. Other methods (Gu and Cercone, 2006; Patwardhan and Riloff, 2009) consider broader context when deciding on role fillers. Other systems take the whole discourse feature into consideration, such as (Maslennikov and Chua, 2007; Liao and Grishman, 2010; Hong et al., 2011; Huang and Riloff, 2011). Ji and Grishman (2008) even consider topic-related documents, proposing a cross-document method. 
(Liao and Grishman, 2010; Hong et al., 2011) use a series of global features (for example, the occurrence of one event type lead to the occurrence of another) to improve role assignment and event classification performance. Joint models (Li et al., 2013; Lu and Roth, 2012) are also considered an effective solution. (Li et al., 2013) make full use of the lexical and contextual features to get better results. The semi-CRF based method (Lu and Roth, 2012) trains separate models for each event type, which requires a lot of training data. The dynamic multi-pooling convolutional neural network (DMCNN) (Chen et al., 2015) is currently the only widely used deep neural network based approach. DMCNN is mainly used to model contextual features. However, DMCNN still does not consider argument-argument interactions. In summary, most of the above works are either pattern-only or features-only. Moreover, all of these methods consider arguments sepa1225 rately while ignoring the relationship between arguments, which is also important for argument identification. Even the joint method (Li et al., 2013) does not model argument relations directly. We use trigger embedding, sentencelevel embedding, and pattern features together as our features for trigger classification and design a regularization-based method to solve the two problems. 3 ACE Event Extraction Task Automatic Content Extraction (ACE) is an event extraction task. It annotates 8 types and 33 subtypes of events. ACE defines the following terminologies: • Entity: an object or a set of objects in one of the semantic categories of interest • Entity mention: a reference to an entity, usually a noun phrase (NP) • Event trigger: the main word which most clearly expresses an event occurrence • Event arguments: the entity mentions that are involved in an event • Argument roles: the relation of arguments to the event where they participate, with 35 total possible roles • Event mention: a phrase or sentence within which an event is described, including trigger and arguments Given an English document, an event extraction system should identify event triggers with their subtypes and arguments from each sentence. An example is shown in Figure 1. There is an “Attack” event triggered by “tear through” with three arguments. Each argument has a role type such as “Instrument”, “Target”, etc. For evaluation, we follow previous works (Ji and Grishman, 2008; Liao and Grishman, 2010; Li et al., 2013) to use the following criteria to determine the correctness of the predicted event mentions. • A trigger is considered to be correct if and only if its event type and offsets (position in the sentence) can match the reference trigger; • An argument is correctly identified if and only if its event type and offsets can match any reference arguments; • An argument is correctly identified and classified if and only if its event type, offsets, and role match any of the reference arguments. 4 Baseline: JET Extractor for Events Many previous works take JET as their baseline system, including (Ji and Grishman, 2008), (Liao and Grishman, 2010), (Li et al., 2013). JET extracts events independently for each sentence. This system uses pattern matching to predict trigger and event types, then uses statistical modeling to identify and classify arguments. For each event mention in the training corpus of ACE, the patterns are constructed based on the sequences of constituent heads separating the trigger and arguments. 
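The three matching criteria listed in Section 3 amount to simple tuple comparisons, which the following minimal Python sketch spells out before we return to JET's classifiers; the Trigger and Argument containers and their field names are our own illustration, not part of any released ACE scoring script.

from collections import namedtuple

# Hypothetical containers for system and reference mentions; field names are ours.
Trigger = namedtuple("Trigger", ["event_type", "start", "end"])
Argument = namedtuple("Argument", ["event_type", "start", "end", "role"])

def trigger_correct(pred, gold_triggers):
    # correct iff the event type and offsets match some reference trigger
    return any(pred.event_type == g.event_type and
               (pred.start, pred.end) == (g.start, g.end)
               for g in gold_triggers)

def argument_identified(pred, gold_args):
    # identified iff the event type and offsets match any reference argument
    return any(pred.event_type == g.event_type and
               (pred.start, pred.end) == (g.start, g.end)
               for g in gold_args)

def argument_classified(pred, gold_args):
    # identified and classified iff event type, offsets, and role all match
    return any(pred.event_type == g.event_type and
               (pred.start, pred.end) == (g.start, g.end) and
               pred.role == g.role
               for g in gold_args)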
After that, three Maximum Entropy classifiers are trained using lexical features. • Argument Classifier: to distinguish arguments from non-arguments • Role Classifier: to label arguments with an argument role • Reportable-Event Classifier: to determine whether there is a reportable event mentioned (worth being taken as an event mention) according to the trigger, event type, and a set of arguments Figure 2(a) shows the whole test procedure. In the test procedure, each sentence is scanned for nouns, verbs and adjectives as trigger candidates. When a trigger candidate is found, the system tries to match the context of the trigger against the set of patterns associated with that trigger. If this pattern matching process is successful, the best pattern will assign some of the entity mentions in the sentence as arguments of a potential event mention. Then JET uses the argument classifier to judge if the remaining entity mentions should also be identified. If yes, JET uses the role classifier to assign it a role. Finally, the reportable-event classifier is applied to decide whether this event mention should be reported. 5 Regularization-Based Pattern Balancing Method Different with JET, as illustrated in Figure 2(b), our work introduces two major improvements: (1) balance the effect of patterns and other features (2) 1226 ... In Baghdad, a cameraman died when ... n, v, adj: trigger candidate trigger = died find best pattern① get arguments & roles yes MaxEnt for argument② MaxEnt for role③ MaxEnt for reportable event④ (a) The flow chart of JET ... In Baghdad, a cameraman died when ... n, v, adj: trigger candidate trigger = died SVM for Event type① MaxEnt for argument② MaxEnt for role③ Regularization MaxEnt for reportable event④ (b) The flow chart of our approach Figure 2: The left is the flow chart for JET. The right is the flow chart for our approach. The thick line block is our contribution use a regularization-based method to make full use of the relation between candidate arguments. The thick-edge blocks in Figure 2(b) represent our improvements. Since JET only uses patterns when predicting the event type, we use a SVM classifier to decide each candidate trigger’s event type (classify the trigger). This classifier uses trigger embedding, sentence-level embedding and pattern features together for balancing. After the outputs of argument and role classifier are calculated, we make use of the argument relationship to regularize for a better result. 5.1 Balancing the Pattern effects Deciding the event type is the same as classifying an event trigger. JET only uses patterns in this step: for a candidate trigger, we find that the best matched pattern and the corresponding event type are assigned to this trigger. We propose using feature-based methods while not ignoring the effect of patterns. Inspired by progress in representation learning, we use trigger embedding, sentence-level embedding and pattern embedding together as our features. A pattern example is as follows: (weapon) tore [through] (building) at (place) ⇒Attack{Roles...} where each pair of round brackets represents an entity and the word inside is one of the 18 entity types defined by UIUC NER Tool2. The word in the square brackets can choose to exist or not. After the right arrow there is an event schema, which can tell us what kind of event this is and which roles each entity should take. Each pattern has a corresponding event type. A candidate trigger may match more than one pattern so that it has an event type distribution. 
Assume that there are NT event types in total, we denote the pattern feature vector (namely, the event type’s probability distribution calculated by the trigger’s pattern set) as PE ∈RNT , which is calculated by Eq 1. PE(i) = #(matched patterns of event type i) #(all matched patterns) (1) Trigger embeddings are obtained using WORD2VEC3 with the default “text8” training text data with length 200. Since all of the NPs are potential roles in the event, they must contain the main information of the event. We extract all the NPs in the sentence and take the average word embedding of these NPs’ head word as the sentence-level embedding. For example, in Figure 1, these NPs’ head words are bomb, shed, and airport. Pattern feature vectors, as distributions of event types over patterns, are also composed using 2http://cogcomp.cs.illinois.edu/page/software view/NETagger 3http://code.google.com/p/word2vec/ 1227 continuous real values, which allows them to be viewed as a kind of pattern embedding and treated similarly to trigger and sentence embedding. 5.2 Capturing the Relationship Between Arguments We find that there are two typical relations between candidate arguments: (1) positive correlation: if one candidate argument belongs to one event, then the other is more likely to belong to the same event; (2) negative correlation: if one candidate argument belongs to one event, then the other is less likely to belong to the same event. We calculate a score for all the candidate arguments in a sentence to judge the quality of the argument identification and classification. For capturing the two kinds of relations, we intend to make that (1) the more positive relations the chosen arguments have, the higher the score is; (2) the more negative relations the chosen arguments have, the lower the score is. For a trigger, if there are n candidate arguments, we set a n × n matrix C to represent the relationship between arguments. If Ci,j = 1, then argument i and argument j should belong to the same event. If Ci,j = −1, then argument i and argument j cannot belong to the same event. We will illustrate how to get matrix C in the next section. We use a n-dim vector X to represent the identification result of arguments. Each entry of X is 0 or 1. 0 represents “noArg”, 1 represents “arg”. X can be assigned by maximizing E(X) as defined by Eq 2. X = argmax X E(X) E(X) = λ1XT CX + λ2P arg sum + (1 −λ1 −λ2)P role sum (2) Here, XT CX means adding up all the relationship values if the two arguments are identified. Hence, the more the identified arguments are related, the larger the value XT CX is. P arg sum is the sum of all chosen arguments’ probabilities. The probability here is the output of the arguments’ maximum entropy classifier. P role sum is the sum of all the classified roles’ probabilities. The probability here is the output of the roles’ maximum entropy classifier. Eq 2 shows that while we should identify and classify the candidate arguments with a larger probability, the argument relationship evaluation should also be as large as possible. The arguments should also follow the following constraints. These constraints together with Eq 2 can make the argument identification and classification help each other for a better result. 
• Each entity can only take one role • Each role can belong to one or more entities • The role assignment must follow the event schema of the corresponding type, which means that only the roles in the event schema can occur in the event mention We use the Beam Search method to search for the optimal assignment X as is shown in Algorithm 1. The hyperparameters λ1 and λ2 can be chosen according to development set. Input: Argument relationship matrix: C the argument probabilities required by P arg sum the role probabilities required by P role sum Data: K: Beam size n: Number of candidate arguments Output: The best assignment X Set beam B ←[ϵ] ; for i ←1 · · · n do buf←{z′ ◦l|z′ ∈B, l ∈{0, 1}}; B ←[ϵ] ; while j ←1 · · · K do xbest = argmaxx∈buf E(x); B ←B ∪{xbest}; buf←buf−{xbest}; end end Sort B descendingly according to E(X); return B[0]; Algorithm 1: Beam Search decoding algorithm for event extraction. ◦means to concatenate an element to the end of a vector. 5.2.1 Training the Argument Relationship Structure The argument relationship matrix C is very important in the regularization process. We train a maximum entropy classifier to predict the connection between two entities. We intend to classify the entity pairs into three classes: positive correlation, negative correlation, and unclear correlation. The entity pairs in the ground truth events (in training 1228 data) are used for our training data. We choose the following features: • TRIGGER: the trigger of the event. The whole model is a pipelined model, so when classifying the argument relationship, the trigger has been identified and classified. So the “trigger” is a feature of the argument relation. • ENTITY DISTANCE: the distance between the two candidate arguments in the sentence, namely the number of intervening words • Whether the two candidate arguments occur on the same side of the trigger • PARENT DEPENDENCY DISTANCE: the distance between the two candidate arguments’ parents in the dependency parse tree, namely, the path length. • PARENT POS: if the two candidate arguments share the same parent, take the common parent’s POS tag as a feature • Whether the two candidate arguments occur on the same side of the common parent if the two candidate arguments share the same parent For an entity pair, if both of the entities belong to the same event’s arguments, we take it as positive example. For each positive example, we randomly exchange one of the entities with an irrelevant entity (an irrelevant entity is in the same sentence as the event, but it is not the event’s argument) to get a negative example. In the testing procedure, we predict the relationship between entity i and entity j using the maximum entropy classifier. When the output of the maximum entropy classifier is around 0.5, it is not easy to figure out whether it is the first relation or the second. We call this kind of information “uncertain information”(unclear correlation). For better performance, we strengthen the certain information and weaken the uncertain information. We set two thresholds, if the output of the maximum entropy classifier is larger than 0.8, we set Ci,j = 1 (positive correlation), if the output is lower than 0.2, we set Ci,j = −1 (negative correlation), otherwise, we set Ci,j = 0 (unclear correlation). The strengthen mapping is similar to the hard tanh in neural network. If we do not do this, according to the experiment, the performance cannot beat most of the baselines since the uncertain information has very bad noise. 
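Before moving to the experiments, the pieces of this section can be tied together in a minimal sketch. It assumes that arg_probs and role_probs hold the per-candidate outputs of the two maximum entropy classifiers, that the matrix C has already been strengthened to {+1, -1, 0}, and that event types are integer ids; the lambda defaults are the development-set values reported in Section 6.2.1, and all names are ours rather than the authors' code.

import numpy as np

def pattern_feature_vector(matched_event_types, n_types):
    # Eq. (1): PE(i) = #(matched patterns of event type i) / #(all matched patterns)
    pe = np.zeros(n_types)
    for t in matched_event_types:            # one entry per matched pattern
        pe[t] += 1.0
    return pe / max(len(matched_event_types), 1)

def strengthen(p, hi=0.8, lo=0.2):
    # hard-tanh-like mapping of the pairwise maxent output onto +1 / -1 / 0
    return 1 if p > hi else (-1 if p < lo else 0)

def assignment_score(x, C, arg_probs, role_probs, lam1=0.10, lam2=0.45):
    # Eq. (2): E(X) = lam1 * X^T C X + lam2 * P_arg_sum + (1 - lam1 - lam2) * P_role_sum
    x = np.asarray(x, dtype=float)           # x[i] = 1 if candidate i is kept as an argument
    relation = x @ C @ x
    p_arg = float(np.dot(x, arg_probs))      # probabilities of the chosen arguments
    p_role = float(np.dot(x, role_probs))    # probabilities of their assigned roles
    return lam1 * relation + lam2 * p_arg + (1.0 - lam1 - lam2) * p_role

def beam_search(C, arg_probs, role_probs, beam_size=4, lam1=0.10, lam2=0.45):
    # Algorithm 1: grow assignments one candidate at a time, keeping the K best prefixes
    n = len(arg_probs)
    beam = [[]]
    for _ in range(n):
        grown = [prefix + [label] for prefix in beam for label in (0, 1)]
        grown.sort(key=lambda z: assignment_score(z + [0] * (n - len(z)),
                                                  C, arg_probs, role_probs, lam1, lam2),
                   reverse=True)
        beam = grown[:beam_size]
    return beam[0]                           # highest-scoring full assignment X

The sketch covers only the identification vector X that Eq. (2) scores; role assignment under the schema constraints above would then be read off the kept candidates.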
6 Experiments 6.1 Data We utilize ACE 2005 data sets as our testbed. As is consistent with previous work, we randomly select 10 newswire texts from ACE 2005 training corpora as our development set, and then conduct blind test on a separate set of 40 ACE 2005 newswire texts. The remaining 529 documents in ACE training corpus are used as the training data. The training dataset of the argument relationship matrix contains 5826 cases (2904 positive and 2922 negative) which are randomly generated according to the ground truth in the 529 training documents. 6.2 Systems to Compare We compare our system against the following systems: • JET is the baseline of (Grishman et al., 2005), we report the paper values of this method; • Cross-Document is the method proposed by Ji and Grishman (2008), which uses topic-related documents to help extract events in the current document; • Cross-Event is the method proposed by Liao and Grishman (2010), which uses documentlevel information to improve the performance of ACE event extraction. • Cross-Entity is the method proposed by Hong et al. (2011), which extracts events using cross-entity inference. • Joint is the method proposed by Li et al. (2013), which extracts events based on structure prediction. It is the best-reported structure-based system. • DMCNN is the method proposed by Chen et al. (2015), which uses a dynamic multipooling convolutional neural network to extract events. It is the only neural network based method. The Cross-Document, Cross-Event and CrossEntity are all extensions of JET. Among these 1229 Method Trigger Argument Argument Classification Identification Role P R F1 P R F1 P R F1 JET 67.6 53.5 59.7 46.5 37.2 41.3 41.0 32.8 36.5 Cross-Event 68.7 68.9 68.8 50.9 49.7 50.3 45.1 44.1 44.6 Cross-Entity 72.9 64.3 68.3 53.4 52.9 53.1 51.6 45.5 48.3 Joint 73.7 62.3 67.5 69.8 47.9 56.8 64.7 44.4 52.7 DMCNN 75.6 63.6 69.1 68.8 51.9 59.1 62.2 46.9 53.5 RBPB(JET) 62.3 59.9 61.1 50.4 45.8 48.0 41.9 36.5 39.0 + ET 66.7 65.9 66.3 60.6 56.7 58.6 49.2 48.3 48.7 + Regu 67.2 61.7 64.3 62.8 57.5 60.0 52.6 48.4 50.4 + ET + Regu 70.3 67.5 68.9 63.2 59.4 61.2 54.1 53.5 53.8 Table 1: Overall performance with gold-standard entities, timex, and values, the candidate arguments are annotated in ACE 2005. “ET” means the pattern balancing event type classifier, “Regu” means the regularization method methods, Cross-Event, Cross-Entity, and DMCNN make use of the gold-standard entities, timex, and values annotated in the corpus as the argument candidates. Cross-Document uses the JET system to extract candidate arguments. Li et al. (2013) report the performance with both gold-standard argument candidates and predicted argument candidates. Therefore, we compare our results with methods based on gold argument candidates in Table 1 and methods based on predicted argument candidates in Table 2. We have done a series of ablation experiments: • RBPB(JET): Our own implementation of JET • RBPB(JET) + ET: Add pattern balanced event type classifier to RBPB(JET) • RBPB(JET) + Regu: Add regularization mechanism to RBPB(JET) • RBPB(JET) + ET + Regu: Add both pattern balanced event type classifier and regularization mechanism to RBPB(JET) 6.2.1 The Selection of Hyper-parameters We tune the coefficients λ1 and λ2 of Eq 2 on the development set, and finally we set λ1 = 0.10 and λ2 = 0.45. Figure 3 shows the variation of argument identification’s F1 measure and argument classification’s F1 measure when we fix one parameter and change another. 
Note that the third coefficient 1 −λ1 −λ2 must be positive, which is the reason why the curve decreases sharply when λ2 is fixed and λ1 > 0.65. Therefore, Figure 3 illustrates that the robustness of our method is very good, which means if the hyperparameters λ1, λ2 are larger or smaller, it will not affect the result very much. 6.3 Experiment Results We conduct experiments to answer the following questions. (1) Can pattern balancing lead to a higher performance in trigger classification, argument identification, and classification while retaining the precision value? (2) Can the regularization step improve the performance of argument identification and classification? Table 1 shows the overall performance on the blind test set. We compare our results with the JET baseline as well as the Cross-Event, CrossEntity, and joint methods. When adding the event type classifier, in the line titled “+ ET”, we see a significant increase in the three measures over the JET baseline in recall. Although our trigger’s precision is lower than RBPB(JET), it gains 5.2% improvement on the trigger’s F1 measure, 10.6% improvement on argument identification’s F1 measure and 9.7% improvement on argument classification’s F1 measure. We also test the performance with argument candidates automatically extracted by JET in Table 2, our approach “+ ET” again significantly outperforms the JET baseline. Remarkably, our result is comparable with the Joint model although we only use lexical features. The line titled “+ Regu” in Table 1 and Table 2 represents the performance when we only use the regularization method. In Table 1, Compared to the four baseline systems, the argument identifi1230 Method Trigger F1 Arg id F1 Arg id+cl F1 JET 59.7 42.5 36.6 Cross-Document 67.3 46.2 42.6 Joint 65.6 41.8 RBPB(JET) 60.4 44.3 37.1 + ET 66.0 47.8 39.7 + Regu 64.8 54.6 42.0 + ET + Regu 67.8 55.4 43.8 Table 2: Overall performance with predicted entities, timex, and values, the candidate arguments are extracted by JET. “ET” is the pattern balancing event type classifier, “Regu” is the regularization method 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.58 0.585 0.59 0.595 0.6 0.605 0.61 0.615 0.62 Another coefficient Argument Identify F1 λ2=0.45, λ1∈(0,1] λ1=0.10, λ2∈(0,1] (a) Arg Identify 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.47 0.48 0.49 0.5 0.51 0.52 0.53 Another coefficient Argument Classifier F1 λ2=0.45, λ1∈(0,1] λ1=0.10, λ2∈(0,1] (b) Arg Classify Figure 3: The trend graph when fix one coefficient and change another cation’s F1 measure of “+ Regu” is significantly higher. In Table 2, the “+ Regu” again gains a higher F1 measure than the JET, Cross-Document, joint model baseline and “+ ET”. The complete approach is denoted as “RBPB” in Table 1 and Table 2. Remarkably, our approach performances comparable in trigger classification with the state-of art methods: Cross-Document, Cross-Event, Cross-Entity, Joint model, DMCNN and significantly higher than them in argument identification as well as classification although we did not use the cross-document, cross-event information or any global feature. Therefore, the relationship between argument candidates can indeed contribute to argument identification performance. The event type classifier also contributes a lot in trigger identification & classification. We do the Wilcoxon Signed Rank Test on trigger classification, argument identification and argument classification, all the three have p < 0.01. A more detailed study of the pattern feature’s effect is shown in Table 3. 
We can see that RBPB with both plain feature and pattern feature can gain Method (RBPB) Trigger Arg id Arg id+cl + Plain feature 66.0 60.5 50.4 + Pattern feature 65.8 60.1 49.2 + Both 68.9 61.2 53.8 Table 3: The effect (F1 value) of pattern feature much better performance than with two kinds of features alone. However, our approach is just a pipeline approach which suffers from error propagation and the argument performance may not affect the trigger too much. We can see from Table 1 that although we use gold argument candidates, the trigger performance is still lower than DMCNN. Another limitation is that our regularization method does not improve the argument classification too much since it only uses constraints to affect roles. Future work may be done to solve these two limitations. 6.4 Analysis of Argument Relationships The accuracy of the argument relationship maxent classifier is 82.4%. Fig 4 shows an example of the argument relationship matrix, which works 1231 Powerful bomb A waiting shed Davao airport Bus Powerful bomb A waiting shed Davao airport Bus Powerful bomb A waiting shed Davao airport Bus Powerful bomb A waiting shed Davao airport Bus Figure 4: The Argument Relationship Matrix. Left is the origin matrix. Right is the strengthened matrix for the sentence in Fig 1. In the left part of Fig 4, we can see the argument relationship we capture directly (the darker blue means stronger connection, lighter blue means weaker connection). After strengthening, on the right, the entities with strong connections are classified as positive correlations (the black squares), weak connections are classified as negative correlations (the white squares). Others (the grey squares) are unclear correlations. We can see that positive correlation is between “Powerful bomb” and “A waiting shed” as well as “A waiting shed” and “Davao airport”. Therefore, these entities tend to be extracted at the same time. However, “Powerful bomb” and “Bus” has a negative correlation, so they tend not to be extracted at the same time. In practice, the argument probability of “Powerful bomb” and “A waiting shed” are much higher than the other two. Therefore, “Powerful bomb”, “A waiting shed” and “Davao airport” are the final extraction results. 7 Conclusion In this paper, we propose two improvements based on the event extraction baseline JET. We find that JET depends too much on event patterns for event type priori and JET considers each candidate argument separately. However, patterns cannot cover all events and the relationship between candidate arguments may help when identifying arguments. For a trigger, if no pattern can be matched, the event type cannot be assigned and the arguments cannot be correctly identified and classified. Therefore, we develop an event type classifier to assign the event type, using both pattern matching information and other features, which gives our system the capability to deal with failed match cases when using patterns alone. On the other hand, we train a maximum entropy classifier to predict the relationship between candidate arguments. Then we propose a regularization method to make full use of the argument relationship. Our experiment results show that the regularization method is a significant improvement in argument identification over previous works. 
In summary, by using the event type classifier and the regularization method, we have achieved a good performance in which the trigger classification is comparable to state-of-theart methods, and the argument identification & classification performance is significantly better than state-of-the-art methods. However, we only use sentence-level features and our method is a pipelined approach. Also, the argument classification seems not to be affected too much by the regularization. Future work may be done to integrate our method into a joint approach, use some global feature, which may improve our performance. The code is available at https://github.com/shalei120/ RBPB/tree/master/RBET_release Acknowledgements We would like to thank our three anonymous reviewers for their helpful advice on various aspects of this work. This research was supported by the National Key Basic Research Program of China (No.2014CB340504) and the National Natural Science Foundation of China (No.61375074,61273318). The contact author for this paper is Zhifang Sui. 1232 References Mary Elaine Califf and Raymond J Mooney. 2003. Bottom-up relational learning of pattern matching rules for information extraction. The Journal of Machine Learning Research, 4:177–210. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 976–986. Association for Computational Linguistics. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176, Beijing, China, July. Association for Computational Linguistics. Hai Leong Chieu and Hwee Tou Ng. 2002. A maximum entropy approach to information extraction from semi-structured and free text. AAAI/IAAI, 2002:786–791. Fabio Ciravegna et al. 2001. Adaptive information extraction from text by rule induction and generalisation. In International Joint Conference on Artificial Intelligence, volume 17, pages 1251–1256. LAWRENCE ERLBAUM ASSOCIATES LTD. Aidan Finn and Nicholas Kushmerick. 2004. Multilevel boundary classification for information extraction. Springer. Dayne Freitag. 1998a. Multistrategy learning for information extraction. In ICML, pages 161–169. Dayne Freitag. 1998b. Toward general-purpose learning for information extraction. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 404–408. Association for Computational Linguistics. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyus english ace 2005 system description. ACE, 5. Zhenmei Gu and Nick Cercone. 2006. Segment-based hidden markov models for information extraction. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 481–488. Association for Computational Linguistics. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1127– 1136. Association for Computational Linguistics. Ruihong Huang and Ellen Riloff. 2011. Peeling back the layers: detecting event role fillers in secondary contexts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1137–1147. Association for Computational Linguistics. Ruihong Huang and Ellen Riloff. 2012. Bootstrapped training of event extraction classifiers. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 286–295. Association for Computational Linguistics. Scott B Huffman. 1996. Learning information extraction patterns from examples. In Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, pages 246–260. Springer. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of the 46st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 254–262. Jun-Tae Kim and Dan I Moldovan. 1993. Acquisition of semantic patterns for information extraction from corpora. In Artificial Intelligence for Applications, 1993. Proceedings., Ninth Conference on, pages 171–176. IEEE. Yaoyong Li, Kalina Bontcheva, and Hamish Cunningham. 2005. Using uneven margins svm and perceptron for information extraction. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 72–79. Association for Computational Linguistics. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82, Sofia, Bulgaria, August. Association for Computational Linguistics. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797. Association for Computational Linguistics. Ting Liu and Tomek Strzalkowski. 2012. Bootstrapping events and relations from text. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 296–305. Association for Computational Linguistics. 1233 Wei Lu and Dan Roth. 2012. Automatic event extraction with structured preference modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1, pages 835–844. Association for Computational Linguistics. Mstislav Maslennikov and Tat-Seng Chua. 2007. A multi-resolution framework for information extraction from free text. In ANNUAL MEETINGASSOCIATION FOR COMPUTATIONAL LINGUISTICS, volume 45, page 592. Citeseer. Siddharth Patwardhan and Ellen Riloff. 2007. Effective information extraction with semantic affinity patterns and relevant regions. In EMNLP-CoNLL, volume 7, pages 717–727. Citeseer. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 151– 160. Association for Computational Linguistics. Ellen Riloff et al. 1993. Automatically constructing a dictionary for information extraction tasks. 
In AAAI, pages 811–816. Ellen Riloff, Rosie Jones, et al. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In AAAI/IAAI, pages 474–479. Ellen Riloff. 1996. Automatically generating extraction patterns from untagged text. In Proceedings of the national conference on artificial intelligence, pages 1044–1049. Satoshi Sekine. 2006. On-demand information extraction. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 731–738. Association for Computational Linguistics. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 304–311. Association for Computational Linguistics. Stephen Soderland, David Fisher, Jonathan Aseltine, and Wendy Lehnert. 1995. Crystal: Inducing a conceptual dictionary. arXiv preprint cmp-lg/9505020. Mark Stevenson and Mark A Greenwood. 2005. A semantic approach to ie pattern induction. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 379–386. Association for Computational Linguistics. Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern representation model for automatic ie pattern acquisition. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 224–231. Association for Computational Linguistics. Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 940–946. Association for Computational Linguistics. Kun Yu, Gang Guan, and Ming Zhou. 2005. Resume information extraction with cascaded hybrid model. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 499– 506. Association for Computational Linguistics. 1234
2016
116
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1235–1244, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Network-Based Model for Japanese Predicate Argument Structure Analysis Tomohide Shibata and Daisuke Kawahara and Sadao Kurohashi Graduate School of Informatics, Kyoto University Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan {shibata, dk, kuro}@i.kyoto-u.ac.jp Abstract This paper presents a novel model for Japanese predicate argument structure (PAS) analysis based on a neural network framework. Japanese PAS analysis is challenging due to the tangled characteristics of the Japanese language, such as case disappearance and argument omission. To unravel this problem, we learn selectional preferences from a large raw corpus, and incorporate them into a SOTA PAS analysis model, which considers the consistency of all PASs in a given sentence. We demonstrate that the proposed PAS analysis model significantly outperforms the base SOTA system. 1 Introduction Research on predicate argument structure (PAS) analysis has been conducted actively these days. The improvement of PAS analysis would benefit many natural language processing (NLP) applications, such as information extraction, summarization, and machine translation. The target of this work is Japanese PAS analysis. The Japanese language has the following characteristics: • head final, • free word order (among arguments), and • postpositions function as (surface) case markers. Japanese major surface cases are が(ga), を(wo), and に(ni), which correspond to Japanese postpositions (case markers). We call them nominative case, accusative case, and dative case, respectively. In this paper, we limit our target cases to these three cases. Note that though they are surface cases, they roughly correspond to Arg1, Arg2, and Arg3 of English semantic role labeling based on PropBank. Japanese PAS analysis has been considered as one of the most difficult basic NLP tasks, due to the following two phenomena. Case disappearance When a topic marker は (wa) is used or a noun is modified by a relative clause, their case markings disappear as in the following examples.1 (1) a. ジョンは John-TOP パンを bread-ACC 食べた。 ate →ジョンが John-NOM (John ate bread.) b. パンは bread-TOP ジョンが John-NOM 食べた。 ate →パンを bread-ACC (John ate bread.) (2) a. パンを食べたジョンを... → bread-ACC ate John-ACC ジョンが(食べた) John-NOM (ate) (John, who ate bread, ...) b. ジョンが John-NOM 食べたパンが...→ ate bread-NOM パンを(食べた) (ate) bread-ACC (Bread, which John ate, ...) In the example sentences (1a) and (1b), since a topic marker はis used, the NOM and ACC case markers disappear. In the example sentences (2a) and (2b), since a noun is modified by a relative clause, the NOM case of “ジョン” (John) for “食 べた” (eat) and ACC case of “パン” (bread) for “食べた” disappear. Argument omission Arguments are very often omitted in Japanese sentences. This phenomenon is totally different from English sentences, where the word order is fixed and pronouns are used con1In this paper, we use the following abbreviations: NOM (nominative), ACC (accusative), DAT (dative) and TOP (topic marker). 1235 !"#$%&&'#$(&&)*+,&&&-./0! 1 "#$%&'()1*+,-.&/001 *#12$3&-%.1 -3,1 .,4,%.,%56!4-+78%21 5-7,!-%-967871 :(;!! <,+#!-%-4$#+-! +,7#91=#%1 >&:(;1 >&/001 Figure 1: An example of PAS analysis. Input sentence: “ジョンはパンを買って、食べた。” (John bought bread, and ate it.) sistently. For example, let us compare the following parallel Japanese and English sentences: (3) a. 
ジョンは John-TOP パンを bread-ACC 買って、 bought 食べた。 ate b. John bought bread, and ate it. The dependency parse of (3a) is shown in Figure 1. In general, the first phrase with a topic marker はis treated as modifying the final predicate according to the guidelines of Japanese dependency annotation. As a result, “買って” (bought) has no NOM argument (omitted), and “食べた” (ate) has no ACC argument. Note that “食べた” has an argument “ジョン” (John), but its case does not appear. In the case of the parallel sentences (4) below, again we can witness the difficulty of Japanese PAS analysis. (4) a. パンを bread-ACC 買った bought ジョンは John-TOP 急いで hurry 食べた。 ate b. John who bought bread ate it in a hurry. Although all the case arguments of the predicates “bought” and “ate” are explicit in (4b), the case of “ジョン” (John) for “買った” (bought) and that for “食べた” (ate) are hidden, and the ACC argument of “食べた” (ate) is omitted in (4a). Many researchers have been tackling Japanese PAS analysis (Taira et al., 2008; Imamura et al., 2009; Hayashibe et al., 2011; Sasano and Kurohashi, 2011; Hangyo et al., 2013; Ouchi et al., 2015). However, because of the two aforementioned characteristics in Japanese sentences, the accuracy of Japanese PAS analysis for omitted (zero) arguments remains around 40%. This paper proposes a novel Japanese PAS analysis model based on a neural network (NN) framework, which has been proved to be effective for several NLP tasks recently. To unravel the tangled situation in Japanese, we learn selectional preferences from a large raw corpus, and incorporate them into a SOTA PAS analysis model proposed by Ouchi et al. (2015), which considers the consistency of all PASs in a given sentence. This model is achieved by an NN-based two-stage model that acquires selectional preferences in an unsupervised manner in the first stage and predicts PASs in a supervised manner in the second stage as follows. 1. The most important clue for PAS analysis is selectional preferences, that is, argument prediction from a predicate phrase. For example, how likely the phrase “パンを買った” (bought bread) takes “ジョン” (John) as its NOM argument. Such information cannot be learned from a medium-sized PAS annotated corpus with size of the order of ten-thousand sentences; it is necessary to use a huge raw corpus by an unsupervised method. Ouchi et al. (2015) did not utilize such knowledge extracted from a raw corpus. Some work has utilized PMI between a predicate and an argument, or case frames obtained from a raw corpus. However, this is discrete word-based knowledge, not generalized semantic knowledge. As the first stage of the method, we learn a prediction score from a predicate phrase to an argument by an NN-based method. The resultant vector representations of predicates and arguments are used as initial vectors for the second stage of the method. 2. In the second stage, we calculate a score that a predicate in a given sentence takes an element in the sentence as an argument using NN framework. We use the prediction score in the first stage as one feature for the second stage NN. The system by Ouchi et al. (2015) used a manually designed feature template to take the interactions of the atomic features into consideration. In the case of an NN framework, no feature template is required, and a hidden layer in an NN can capture the interactions of the atomic features automatically and flexibly. We demonstrate that the proposed PAS analysis model outperforms the SOTA system by Ouchi et al. (2015). 
1236 2 Related Work Several methods for Japanese PAS analysis have been proposed. The methods can be divided into three types: (i) identifying one case argument independently per predicate (Taira et al., 2008; Imamura et al., 2009; Hayashibe et al., 2011), (ii) identifying all the three case arguments (NOM, ACC, and DAT) simultaneously per predicate (Sasano and Kurohashi, 2011; Hangyo et al., 2013), and (iii) identifying all case arguments of all predicates in a sentence (Ouchi et al., 2015). The third method can capture interactions between predicates and their arguments, and thus performs the best among the three types. This method is adopted as our base model (see Section 3 for details). Most methods for PAS analysis handle both intra-sentential and inter-sentential zero anaphora. For identifying inter-sentential zero anaphora, an antecedent has to be searched in a broad search space, and the salience of discourse entities has to be captured. Therefore, the task of identifying inter-sentential zero anaphora is more difficult than that of intra-sentential zero anaphora. Thus, Ouchi et al. (2015) and Iida et al. (2015) focused on only intra-sentential zero anaphora. Following this trend, this paper focuses on intra-sentential zero anaphora. Recently, NN-based approaches have achieved improvement for several NLP tasks. For example, in transition-based parsing, Chen and Manning (2014) proposed an NN-based approach, where the words, POS tags, and dependency labels are first represented by embeddings individually. Then, an NN-based classifier is built to make parsing decisions, where an input layer is a concatenation of embeddings of words, POS tags, and dependency labels. This model has been extended by several studies (Weiss et al., 2015; Dyer et al., 2015; Ballesteros et al., 2015). In semantic role labeling, Zhou and Xu (2015) propose an end-to-end approach using recurrent NN, where an original text is the input, and semantic role labeling is performed without any intermediate syntactic knowledge. Following these approaches, this paper proposes an NN-based PAS method. 3 Base Model The model proposed by Ouchi et al. (2015) is adopted as our base model (Figure 2). We briefly introduce this base model before describing our !"#$%&&'#$(&&&)*+,&&&-./0! 1 "#$%&1 !'()#*+,-1 ./012*3441!.(%5)&*1#21 !1&01 !"#! '()#1 !"##! )2! .%61 -.3! 01&! '#! ./0121 )2! .%6! -.3! 01&1 73+1 a1 a2 a3 a4 a5 p1 p2 344!1 8,91 344!1 73+1 8,91 Figure 2: Our base model (Ouchi et al., 2015). proposed model. 3.1 Predicate-Argument Graph In this model, for an input sentence, a bipartite graph is constructed, consisting of the set of predicate and argument nodes. This is called Predicate-Argument Graph (PA Graph). A PA graph represents a possible interpretation of the input sentence, including case analysis result and zero anaphora resolution result. A PA graph is a bipartite graph ⟨A, P, E⟩, where A is the node set consisting of candidate arguments, P is the node set consisting of predicates, and E is the set of edges. A PA graph is defined as follows: A = {a1, . . . , an, an+1 = NULL} P = {p1, . . . , pm} E = {⟨a, p, c⟩|deg(p, c) = 1, ∀a ∈A, ∀p ∈P, ∀c ∈C} where n and m represent the number of predicates and arguments, and C denotes the case role set (NOM, ACC, and DAT). An edge e ∈E is represented by a tuple ⟨a, p, c⟩, indicating the edge with a case role c connecting a candidate argument node a and a predicate node p. deg(p, c) is the number of the edges with a case role c outgoing from a predicate node p. 
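As a concrete illustration of this definition (our own rendering, with arbitrary container choices), a PA graph can be stored as a collection of (a, p, c) triples, with deg(p, c) counted directly over the edges:

CASES = ("NOM", "ACC", "DAT")

# A PA graph as a list of edges (a, p, c): candidate-argument index, predicate
# index, and case role; index n+1 plays the role of the NULL dummy node.
def deg(edges, p, c):
    # number of edges with case role c outgoing from predicate node p
    return sum(1 for (_a, p2, c2) in edges if p2 == p and c2 == c)

def admissible(edges, predicates, cases=CASES):
    # the edge set E requires deg(p, c) = 1 for every predicate and case role
    return all(deg(edges, p, c) == 1 for p in predicates for c in cases)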
An admissible PA graph satisfies the constraint deg(p, c) = 1, which means each predicate node p has only one edge with a case role c. A dummy node an+1 is added, which is defined for the cases where a predicate requires no case argument (e.g. when the predicate node “ある” (exist) connects a NULL node with a case ACC, this means this predicate takes 1237 no ACC argument) or the required argument does not appear in the sentence. In the bipartite graph shown in Figure 2, the three kinds of edge lines have the meaning as follows: solid line: the argument node and the predicate node has a dependency relation, and the argument node is followed by a case marking postposition. In this case, these nodes have a relation through its corresponding case marking postposition. Therefore, this edge is fixed. dashed line: the argument node and the predicate node has a dependency relation, and the argument node is not followed by a case marking postposition. These nodes are likely to have a relation2, but the case role is unknown. Identifying this case role corresponds to case analysis. dotted line: the argument node and the predicate node do not have a dependency relation. Identifying this edge and its case role corresponds to zero anaphora resolution. For an input sentence x, a scoring function Score(x, y) is defined for a candidate graph y, and the PA graph that has the maximum score is searched. ˜y = argmax y∈G(x) Score(x, y) (1) where G(x) is a set of admissible PA graphs for the input sentence x. Score(x, y) is defined as follows3: ∑ e∈E(y) scorel(x, e)+ ∑ ei,ej∈Epair(y) scoreg(x, ei, ej). (2) scorel(x, e) = θl · ϕl(x, e) scoreg(x, ei, ej) = θg · ϕg(x, ei, ej) (3) where E(y) is the edge set on the candidate graph y, Epair(y) is a set of edge pairs in the edge set E(y), scorel(x, e) and scoreg(x, ei, ej) represent 2For example, in the sentence “今日は暑い” (today-TOP hot), the predicate “暑い” does not take “今日”, which represents time, as an argument. Therefore, these nodes do not always have a relation. 3Ouchi et al. (2015) introduce two models: Per-Case Joint Model and All-Cases Joint Model. Since All-Cases Joint Model performed better than Per-Case Joint Model, AllCases Joint Model is adopted as our base model. a local score for the edge e and a global score for the edge pair ei and ej, ϕl(x, e) and ϕg(x, ei, ej) represent local features and global features. While ϕl(x, e) is defined for each edge e, ϕg(x, ei, ej) is defined for each edge pair ei, ej (i ̸= j) . θl and θg represent model parameters for local and global features. By using global scores, the interaction between multiple case assignments of multiple predicates can be considered. 3.2 Inference and Training Since global features make the inference of finding the maximum scoring PA graph more difficult, the randomized hill-climbing algorithm proposed in (Zhang et al., 2014) is adopted. Figure 3 describes the pseudo code for hillclimbing algorithm. First, an initial PA graph y(0) is sampled from the set of admissible PA graph G(x). Then, the union Y is constructed from the set of neighboring graphs NeighborG(y(t)), which is a set of admissible graphs obtained by changing one edge in y(t), and the current graph y(t). The current graph y(t) is updated to a higher scoring graph y(t+1). This process continues until no more improvement is possible, and finally an optimal graph ˜y can be obtained. 
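A minimal sketch of the graph score in Equation (2) and of this hill-climbing search (given as pseudocode in Figure 3 below) follows; score_local and score_global stand in for the feature-based scores of Equation (3), sample_graph and neighbors are assumed to enumerate admissible graphs, and all names are ours.

from itertools import combinations

def graph_score(x, edges, score_local, score_global):
    # Eq. (2): local scores over all edges plus global scores over all edge pairs
    edge_list = list(edges)
    local = sum(score_local(x, e) for e in edge_list)
    glob = sum(score_global(x, ei, ej) for ei, ej in combinations(edge_list, 2))
    return local + glob

def hill_climb(x, sample_graph, neighbors, score):
    # sample y(0) from G(x), then repeatedly jump to the best graph among the
    # current one and its one-edge-changed neighbors until no change improves it
    y = sample_graph(x)
    current = score(x, y)
    while True:
        cands = neighbors(y)
        if not cands:
            return y
        best = max(cands, key=lambda g: score(x, g))
        if score(x, best) <= current:
            return y
        y, current = best, score(x, best)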
Input: sentence x, parameter θ Output: a locally optimal PA graph ˜y 1 Sample a PA graph y(0) from G(x) 2 t ←0 3 repeat 4 Y ←NeighborG(y(t)) ∪y(t) 5 y(t+1) ←argmax y∈Y Score(x, y; θ) 6 t ←t + 1 7 until y(t) = y(t+1) 8 return ˜y ←y(t) Figure 3: Hill climbing algorithm for obtaining optimal PA graph. Given N training examples D = {(x, ˆy)}N k , the model parameter θ are estimated. θ is the set of θl and θg, and is estimated by averaged perceptron (Collins, 2002) with a max-margin framework (Taskar et al., 2005). 1238 !"#$! "##$%&% &'! '()*+$% ()! %,%'$+&! -./% 011% *! 2$%3% +,-! "'')$% 4$5"67$! %"8')$%% ...% '#$2*+&% Figure 4: Argument prediction model. In the PAS “警察” (police) NOM “犯人” (suspect) ACC “逮 捕” (arrest), “警察” with the NOM case is predicted given the predicate “逮捕” (arrest) and its ACC “犯人” (suspect). 4 Proposed Model 4.1 Argument Prediction Model No external knowledge is utilized in the base model. One of the most important types of knowledge in PAS analysis is selectional preferences. Sasano and Kurohashi (2011) and Hangyo et al. (2013) extract knowledge of the selectional preferences in the form of case frames from a raw corpus, and the selectional preference score is used as a feature. In this work, argument prediction model is trained using a neural network from a raw corpus, in a similar way to Titov and Khoddam (2015) and Hashimoto et al. (2014). PASs are first extracted from an automaticallyparsed raw corpus, and in each PAS, the argument ai is generated with the following probability p(ai|PAS−ai): p(ai|PAS−ai) = exp(vT aiW T ai(Wpredvpred + ∑ j̸=i Wajvaj)) Z (4) where PAS−ai represents a PAS excluding the target argument ai, vpred, vai and vaj represent embeddings of the predicate, argument ai and argument aj, and Wpred, Wai, and Waj represent transformation matrices for a predicate and an argument ai and aj. Z is the partition function. Figure 4 illustrates the argument prediction model. The PAS “警察” (police) NOM “犯人” (suspect) ACC “逮捕” (arrest)” is extracted from a raw corpus, and the probability of NOM argument “警察” given the predicate “逮捕” and its ACC argument “犯人” is calculated. All the parameters including predicate/argument embeddings and transformation matrices are trained, so that the likelihood given by Equation (4) is high. Since the denominator of Equation (4) is impractical to be calculated since the number of vocabulary is enormous, negative sampling (Mikolov et al., 2013) is adopted. In the example shown in Figure 4, as for a NOM argument, negative examples, such as “机” (desk) and “りんご” (apple), are drawn from the noise distribution, which is a unigram distribution raised to the 3/4th power. In each PAS, all the arguments are predicted in turn. All the parameters are updated using stochastic gradient descent. This model is first trained using the automatic parsing result on a raw corpus, and in performing PAS analysis described in Section 4.2, the score derived from this model is used as a feature. 4.2 Neural Network-Based Score Calculation In the base model, the score for an edge (local score) or an edge pair (global score) is calculated using the dot product of a sparse high-dimensional feature vector with a model parameter, as shown in Equation (3). In our proposed model, these scores are calculated in a standard neural network with one hidden layer, as shown in Figure 5. We first describe the calculation of the local score scorel(x, e). 
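Before the local-score details that follow, Equation (4) can be made concrete with a small sketch. It computes the unnormalized score inside the softmax and a negative-sampling logistic loss used in place of the partition function Z; the vectors and matrices are NumPy arrays of compatible shapes, and the function names are our assumptions, not the authors' implementation.

import numpy as np

def context_vector(v_pred, W_pred, other_args):
    # the bracketed term in Eq. (4): W_pred v_pred + sum over the other arguments of W_aj v_aj
    return W_pred @ v_pred + sum(W_aj @ v_aj for (W_aj, v_aj) in other_args)

def arg_score(v_a, W_case, ctx):
    # v_ai^T W_ai^T ctx, the unnormalized log-probability of argument a_i
    return float(v_a @ (W_case.T @ ctx))

def negative_sampling_loss(v_observed, W_case, ctx, v_sampled):
    # raise the score of the observed argument and lower those of sampled ones
    # (e.g. 5 draws from the unigram distribution raised to the 3/4th power)
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    loss = -np.log(sigmoid(arg_score(v_observed, W_case, ctx)))
    for v_neg in v_sampled:
        loss += -np.log(sigmoid(-arg_score(v_neg, W_case, ctx)))
    return loss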
A predicate p and an argument a are represented by embeddings (a d dimensional vector) vp and va ∈Rd, and vfl ∈Rdf (df represents a dimensional of vfl) represents a feature vector obtained by concatenating the case role between a and p, the argument prediction score obtained from the model described in Section 4.1, and the other atomic features. An input layer is a concatenation of these vectors, and then, a hidden layer hl ∈Rdh (dh represents a dimension of the hidden layer) is calculated as follows: hl = f(W 1 l [vp; va; vfl]) (5) where f is an element-wise activation function (tanh is used in our experiments), and W 1 l ∈ Rdh(2d+dh) is a weight matrix (for the local score) from the input layer to the hidden layer. The scalar score in an output layer is then calculated as follows: scorel(x, e) = f(W 2 l hl) (6) where W 2 l ∈R(2d+dh)·1 is a weight matrix (for the local score) from the hidden layer to the output layer. By calculating the score in this way, all the 1239 !"#$%#&'#$!(")%#&'#$! *+,(*%-,+"#! )*+'(*%-,+"#! W 1 l W 2 l scorel(x, e) !"#$%#&'#$! (")%#&'#$! scoreg(x, ei, ej) W 1 g W 2 g ,(-#! ,(-#.! ,(-#/! (")0%!"#$%% -,+"#! +12#"% 3#(14"#-! +12#"% 3#(14"#-! vp va vfl vpi vpj vai vaj vfg Figure 5: A score calculation in our proposed neural-network based model. The left part and right part represent a local and global score calculation. combinations of features in the input layer can be considered. Next we describe the calculation of the global score scoreg(x, ei, ej). In the base model, the two types of global features are utilized: one is for the two predicates having different arguments, and the other is for the two predicates sharing the same argument. The input layer is a concatenation of involving vectors of predicates/arguments and the other features vfg. For example, when calculating the global score for the two predicates having different arguments, the input layer is a concatenation of the vectors of two predicates and two arguments and vfg. A hidden layer hg is calculated as follows: hg = f(W 1 g [vpi; vpj; vai; vaj; vfg]) (7) where W 1 g is a weight matrix (for the global score) from the input layer to the hidden layer, vpi and vai are the embeddings of the predicate/argument connected by ei, and vpj and vaj are defined in the same way. The scalar score in an output layer is then calculated as follows: scoreg(x, ei, ej) = f(W 2 g hg) (8) where W 2 g is a weight matrix (for the global score) from the hidden layer to the output layer. 4.3 Inference and Training While inference is the same as the base model, training is slightly different. In our proposed model, the model parameter θ consists of the embeddings of predicates/arguments and weight matrices for the local/global score in the neural networks. Our objective is to minimize the following loss function: case # of dep arguments # of zero arguments total NOM 1,402 1,431 2,833 ACC 278 113 391 DAT 92 287 379 ALL 1,772 1,831 3,603 Table 1: Test set statistics of the number of arguments. J(θ) = N ∑ k lk(θ), (9) where lk(θ) = max yk∈G(x)(Score(xk, yk; θ)−Score(xk, ˆyk; θ) + ||yk −ˆyk||1), (10) and ||yk −ˆyk||1 denotes the Hamming distance between the gold PA graph ˆyk and a candidate PA graph yk. Stochastic gradient descent is used for parameter inference. Derivatives with respect to parameters are taken using backpropagation. Adam (Kingma and Ba, 2014) is adopted as the optimizer. 
For initialization of the embeddings of a predicate/argument, the embeddings of the predicate/argument trained by the method described in Section 4.1 are utilized. The weight matrices are randomly initialized. 5 Experiment 5.1 Experimental Setting The KWDLC (Kyoto University Web Document Leads Corpus) evaluation set (Hangyo et al., 2012) was used for our experiments, because it contains 1240 a wide variety of Web documents, such as news articles and blogs. This evaluation set consists of the first three sentences of 5,000 Web documents. Morphology, named entities, dependencies, PASs, and coreferences were manually annotated. This evaluation set was divided into 3,694 documents (11,558 sents.) for training, 512 documents (1,585 sents.) for development, and 700 documents (2,195 sents.) for testing. Table 1 shows the statistics of the number of arguments in the test set. While “dep argument” means that the argument and a predicate have a dependency relation, but a specified case marking postposition is hidden (corresponds to “dashed line” in Section 3.1), “zero argument” means that the argument and a predicate do not have a dependency relation (corresponds to “dotted line” in Section 3.1). Since we want to focus on the accuracy of case analysis and zero anaphora resolution, gold morphological analysis, dependency analysis, and named entities were used. The sentences having a predicate that takes multiple arguments in the same case role were excluded from training and test examples, since the base model cannot handle this phenomena (it assumes that each predicate has only one argument with one case role). For example, the following sentence, (5) そんな such 面白ネタ funny-material 満載な full 日々を daily life-ACC 絵と共に picture-with お届けします。, report (I report my daily life full of such funny materials along with pictures.) where the predicate “お届けします” (report) takes both “日々” (daily life) and “絵” (picture) as ACC case arguments, was excluded from training and testing. About 200 sentences (corresponding to about 1.5% of the whole evaluation set) were excluded. In this evaluation set, zero exophora, which is a phenomenon that a referent does not appear in a document, is annotated. Among five types of zero exophora, the two major types, “author” and “reader,” are adopted, and the others are discarded. To consider “author” and “reader” as a referent, the two special nodes, AUTHOR and READER, are added as well as a NULL node in a PA graph of the base model. When the argument predication score is calculated for “author” or “reader,” because its lemma does not appear in a document, for each noun in the following noun list of “author”/“reader” (Hangyo et al., 2013), the argument prediction score is calculated, and the maximum score is used as a feature. • author: “私” (I), “我々” (we), “僕” (I), “弊 社” (our company), · · · • reader: “あなた” (you), “客” (customer), “ 君” (you), “皆様”(you all), · · · In the argument prediction model training described in Section 4.1, a Japanese Web corpus consisting of 10M sentences was used. We preformed syntactic parsing with a publicly available Japanese parser, KNP4. The number of negative samples was 5, and the number of epochs was 10. In the model training described in Section 4.3, the dimensions of both embeddings for predicates/arguments and hidden layer were set to 100. The number of epochs was set to 20, following the base model. 5.2 Result We compared the following three methods: • Baseline (Ouchi et al., 2015) • Proposed model w/o arg. 
prediction score: in the PAS analysis model, the feature derived from the argument prediction model was not utilized. The embeddings of a predicate/argument were randomly initialized. This method corresponds to adopting the NN-based score calculation in the base model.
• Proposed model w/ arg. prediction score: the feature derived from the argument prediction model was utilized, and the embeddings of a predicate/argument were initialized with those obtained in the argument prediction model learning.
The performance of case analysis and zero anaphora resolution was evaluated by micro-averaged precision, recall, and F-measure, averaged over 5 runs.

    case   method                                       case analysis           zero anaphora
                                                         P      R      F        P      R      F
    NOM    Baseline                                      0.880  0.868  0.874    0.693  0.377  0.488
           Proposed model w/o arg. prediction score      0.927  0.917  0.922    0.559  0.532  0.545
           Proposed model w/ arg. prediction score       0.946  0.936  0.941    0.568  0.586  0.577
    ACC    Baseline                                      0.433  0.374  0.402    0.000  0.000  0.000
           Proposed model w/o arg. prediction score      0.805  0.553  0.656    0.151  0.060  0.085
           Proposed model w/ arg. prediction score       0.890  0.658  0.756    0.297  0.124  0.173
    DAT    Baseline                                      0.224  0.359  0.276    0.531  0.059  0.107
           Proposed model w/o arg. prediction score      0.512  0.104  0.173    0.535  0.242  0.332
           Proposed model w/ arg. prediction score       0.834  0.185  0.300    0.622  0.273  0.378
    ALL    Baseline                                      0.765  0.764  0.765    0.686  0.304  0.421
           Proposed model w/o arg. prediction score      0.908  0.818  0.860    0.544  0.458  0.497
           Proposed model w/ arg. prediction score       0.937  0.853  0.893    0.563  0.509  0.534
    Table 2: Experimental results on the KWDLC corpus.

4 http://nlp.ist.i.kyoto-u.ac.jp/index.php?KNP

Table 2 shows our experimental results. Our proposed method outperformed the baseline method by about 11 absolute points in F-measure. The comparison of "Proposed model w/o arg. prediction score" with the baseline shows that the neural network-based approach is effective, and the comparison of "Proposed model w/ arg. prediction score" with "Proposed model w/o arg. prediction score" shows that our argument prediction model is also effective. The following example is improved by adding the argument prediction score.

(6) 久しぶりに (after a long time) パートですけど、 (part-time job) 働き始めて (begin to work) 新しい (new) 一歩を (step-ACC) 踏み出しました。 (step forward)
    (It's my first part-time job in a long time. I begin to work, and make a new step.)

While the base model wrongly classified the NOM arguments of the predicates "働き始める" (begin to work) and "踏み出す" (step forward) as NULL, adding the argument prediction score allowed them to be correctly identified as "author."
The phenomenon of "case disappearance" occurs in other languages such as Korean, and the phenomenon of "argument omission" occurs in other languages such as Korean, Hindi, Chinese, and Spanish. We believe that our neural network approach to argument prediction and to the calculation of the local and global scores is also effective for such languages.

5.3 Error Analysis

Errors in our proposed model are listed below:
• Recall for ACC and DAT in both case analysis and zero anaphora resolution is low. One reason is that since the number of ACC and DAT arguments is smaller than that of NOM arguments, the system tends to assign NULL to ACC and DAT arguments. Another reason is that since this paper focuses on intra-sentential zero anaphora, the NULL arguments include arguments that appear in previous sentences as well as cases where a predicate takes no argument, which makes the training for NULL arguments difficult.
We are planing to tackle with intersentential zero anaphora resolution. • The distinction of “author” from NULL fails. (7) 肉を meat-ACC 焼くだけが roast-only-NOM BBQ じゃない! BBQ-(COPULA) (Roasting meat isn’t all in BBQ!) Although the NOM argument of the predicate “焼く” (roast) is “author,” our proposed model wrongly classified it as NULL. Hangyo et al. (2013) identify mentions referring to an author or reader in a document, and utilize this result in the zero anaphora resolu1242 tion. We plan to incorporate the author/reader identification into our model. 6 Conclusion In this paper we presented a novel model for Japanese PAS analysis based on neural network framework. We learned selectional preferences from a large raw corpus, and incorporated them into a PAS analysis model, which considers the consistency of all PASs in a given sentence. In our experiments, we demonstrated that the proposed PAS analysis model significantly outperformed the base SOTA model. In the future, we plan to extend our model to incorporate coreference resolution and intersentential zero anaphora resolution. Acknowledgments This work is supported by CREST, Japan Science and Technology Agency. References Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349– 359, Lisbon, Portugal, September. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar, October. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China, July. Association for Computational Linguistics. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2012. Building a diverse document leads corpus annotated with semantic relations. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, pages 535–544, Bali,Indonesia, November. Faculty of Computer Science, Universitas Indonesia. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2013. Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 924–934, Seattle, Washington, USA, October. Association for Computational Linguistics. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2014. Jointly learning word representations and composition functions using predicate-argument structures. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1544–1555, Doha, Qatar, October. 
Association for Computational Linguistics. Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2011. Japanese predicate argument structure analysis exploiting argument position and type. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 201–209, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, JongHoon Oh, and Julien Kloetzer. 2015. Intrasentential zero anaphora resolution using subject sharing recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2179–2189, Lisbon, Portugal, September. Association for Computational Linguistics. Kenji Imamura, Kuniko Saito, and Tomoko Izumi. 2009. Discriminative approach to predicateargument structure analysis with zero-anaphora resolution. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 85–88, Suntec, Singapore, August. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Hiroki Ouchi, Hiroyuki Shindo, Kevin Duh, and Yuji Matsumoto. 2015. Joint case argument identification for Japanese predicate argument structure analysis. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1243 pages 961–970, Beijing, China, July. Association for Computational Linguistics. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 758–766, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Hirotoshi Taira, Sanae Fujita, and Masaaki Nagata. 2008. A Japanese predicate argument structure analysis using decision lists. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 523–532, Honolulu, Hawaii, October. Association for Computational Linguistics. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 896–903, New York, NY, USA. ACM. Ivan Titov and Ehsan Khoddam. 2015. Unsupervised induction of semantic roles within a reconstructionerror minimization framework. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1–10, Denver, Colorado, May–June. Association for Computational Linguistics. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323–333, Beijing, China, July. Association for Computational Linguistics. 
Yuan Zhang, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2014. Greed is good if randomized: New inference for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1013– 1024, Doha, Qatar, October. Association for Computational Linguistics. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127–1137, Beijing, China, July. Association for Computational Linguistics. 1244
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1245–1255, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Addressing Limited Data for Textual Entailment Across Domains Chaitanya Shivade∗Preethi Raghavan† and Siddharth Patwardhan† ∗Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210 [email protected] †IBM T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598 {praghav,siddharth}@us.ibm.com Abstract We seek to address the lack of labeled data (and high cost of annotation) for textual entailment in some domains. To that end, we first create (for experimental purposes) an entailment dataset for the clinical domain, and a highly competitive supervised entailment system, ENT, that is effective (out of the box) on two domains. We then explore self-training and active learning strategies to address the lack of labeled data. With self-training, we successfully exploit unlabeled data to improve over ENT by 15% F-score on the newswire domain, and 13% F-score on clinical data. On the other hand, our active learning experiments demonstrate that we can match (and even beat) ENT using only 6.6% of the training data in the clinical domain, and only 5.8% of the training data in the newswire domain. 1 Introduction Textual entailment is the task of automatically determining whether a natural language hypothesis can be inferred from a given piece of natural language text. The RTE challenges (Bentivogli et al., 2009; Bentivogli et al., 2011) have spurred considerable research in textual entailment over newswire data. This, along with the availability of large-scale datasets labeled with entailment information (Bowman et al., 2015), has resulted in a variety of approaches for textual entailment recognition. ∗This work was conducted during an internship at IBM A variation of this task, dubbed textual entailment search, has been the focus of RTE-5 and subsequent challenges, where the goal is to find all sentences in a corpus that entail a given hypothesis. The mindshare created by those challenges and the availability of the datasets has spurred many creative solutions to this problem. However, the evaluations have been restricted primarily to these datasets, which are in the newswire domain. Thus, much of the existing state-of-the-art research has focused on solutions that are effective in this domain. It is easy to see though, that entailment search has potential applications in other domains too. For instance, in the clinical domain we imagine entailment search can be applied for clinical trial matching as one example. Inclusion criteria for a clinical trial (for e.g., patient is a smoker) become the hypotheses, and the patient’s electronic health records are the text for entailment search. Clearly, an effective textual entailment search system could possibly one day fully automate clinical trial matching. Developing an entailment system that works well in the clinical domain and, thus, automates this matching process, requires lots of labeled data, which is extremely scant in the clinical domain. Generating such a dataset is tedious and costly, primarily because it requires medical domain expertise. Moreover, there are always privacy concerns in releasing such a dataset to the community. Taking this into consideration, we investigate the problem of textual entailment in a low-resource setting. 
We begin by creating a dataset in the clinical domain, and a supervised entailment system that 1245 is competitive on multiple domains – newswire as well as clinical. We then present our work on selftraining and active learning to address the lack of a large-scale labeled dataset. Our self-training system results in significant gains in performance on clinical (+13% F-score) and on newswire (+15% F-score) data. Further, we show that active learning with uncertainty sampling reduces the number of required annotations for the entailment search task by more than 90% in both domains. 2 Related work Recognizing Textual Entailment (RTE) shared tasks (Dagan et al., 2013) conducted annually from 2006 up until 2011 have been the primary drivers of textual entailment research in recent years. Initially the task was defined as that of entailment recognition. RTE-5 (Bentivogli et al., 2009) then introduced the task of entailment search as a pilot. Subsequently, RTE-6 (Bentivogli et al., 2010) and RTE-7 (Bentivogli et al., 2011) featured entailment search as the primary task, but constrained the search space to only those candidate sentences that were first retrieved by Lucene, an open source search engine1. Based on the 80% recall from Lucene in RTE-5, the organizers of RTE-6 and RTE-7 deemed this filter to be an appropriate compromise between the size of the search space and the cost and complexity of the human annotation task. Annotating data for these tasks has remained a challenge since they were defined in the RTE challenges. Successful approaches for entailment (Mirkin et al., 2009; Jia et al., 2010; Tsuchida and Ishikawa, 2011) have relied on annotated data to either train classifiers, or to develop rules for detecting entailing sentences. Operating under the assumption that more labeled data would improve system performance, some researchers have sought to augment their training data with automatically or semi-automatically obtained labeled pairs (Burger and Ferro, 2005; Hickl et al., 2006; Hickl and Bensley, 2007; Zanzotto and Pennacchiotti, 2010; Celikyilmaz et al., 2009). Burger and Ferro (2005) automatically create an entailment recognition corpus using the news headline and the first paragraph of a news article as near-paraphrases. Their approach has an estimated accuracy of 70% on a held out set of 500 pairs. The primary limitation of the approach is that it 1http://lucene.apache.org only generates positive training examples. Hickl et al. (2006) improves upon this work by including negative examples selected using heuristic rules (e.g., sentences connected by although, otherwise, and but). On RTE-2 their method achieves accuracy improvements of upto 10%. However, Hickl and Bensley (2007) achieves only a 1% accuracy improvement on RTE-3 using the same method, suggesting that it is not always as beneficial. Recent work by Bowman et al. (2015) describes a method for generating large scale annotated datasets, viz., the Stanford Natural Language Inference (SNLI) Corpus, for the problem of entailment recognition. They use Amazon Mechanical Turk to very inexpensively produce a large entailment annotated data set from image captions. Zanzotto and Pennacchiotti (2010) create an entailment corpus using Wikipedia data. They handannotate original Wikipedia entries, and their associated revisions for entailment recognition. Using a previously published system for RTE (Zanzotto and Moschitti, 2006), they show that their expanded corpus does not result in improvement for RTE-1, RTE-2 or RTE-3. 
Similarly, Celikyilmaz et al. (2009) address the lack of labeled data by semi-automatically creating an entailment corpus, which they use within their question answering system. They reuse texthypothesis pairs from RTE challenges in addition to manually annotated pairs from a newswire corpus (with pairs for annotation obtained through a Lucene search over the corpus). Note that all of the above research on expanding the labeled data for entailment has focused on entailment recognition. Our focus in this paper is on improving entailment search by exploiting unlabeled data with self-training and active learning. 3 Datasets In this section, we describe the data sets from two domains, newswire and clinical, that we use in the development and evaluation of our work. 3.1 Newswire Domain For the newswire domain, we use entailment search data from the PASCAL RTE-5, RTE-6 and RTE-7 challenges (Bentivogli et al., 2009; Bentivogli et al., 2010; Bentivogli et al., 2011). The dataset consists of a corpus of news documents, along with a set of hypotheses. The hypotheses come from a separate summarization task, where 1246 Dataset Size Entailing Newswire-train 20,104 810 (4.0%) Newswire-dev 35,927 1,842 (5.1%) Newswire-test 17,280 800 (4.6%) Newswire-unlabeled 43,485 Clinical-train 7,026 293 (4.1%) Clinical-dev 8,092 324 (4.0%) Clinical-test 10,466 596 (5.6%) Clinical-unlabeled 623,600 Table 1: Summary of datasets **NAME[XX (YY) ZZ] has no liver problems. PAST MEDICAL HISTORY 1. Htn Well controlled 2. Diabetes mellitus On regular dose of insulin. FAMILY HISTORY: Father with T2DM age unknown Figure 1: Excerpt from a sample clinical note the summary sentences about a news story (given a topic) were manually created by human annotators. These summary sentences are used as hypotheses in the dataset. Entailment annotations are then provided for a subset of sentences from the document corpus, based on a Lucene filter for each hypothesis. In this work, we use the RTE-5 development data to train our system (Newswire-train), RTE-5 test data for evaluation of our systems (Newswiretest), and we use the combined RTE-6 development and test data for our system development and parameter estimation (Newswire-dev). We use all of the development and test data from RTE-7, without the human annotation labels, as our unlabeled data (Newswire-unlabeled) for self-training and active learning experiments. A summary of the newswire data is shown in Table 1. 3.2 Clinical Domain There are no public datasets available for textual entailment search in the clinical domain. In creating this dataset, we imagine a real-world clinical situation where hypotheses are facts about a patient that a physician seeing the patient might want to learn (e.g., The patient underwent a surgical procedure within the last three months.). The unstructured notes in the patients electronic medical record (EMR) is the text against which a system would determine the entailment status of the given hypotheses. Observe that the aforementioned real-world clinical scenario is very closely related to a question answering problem, where instead of hypotheses a physician may pose natural language questions seeking information about the patient (e.g., Has this patient undergone a surgical procedure within the past three months?). Answers to such questions are words, phrases or passages from the patient’s EMR. 
Since we have access to a patient-specific question answering dataset over EMRs2 (henceforth, referred to as the QA dataset), we use it here as our starting point in constructing the clinical domain textual entailment dataset. Given a question answering dataset, how might one go about creating a dataset on textual entailment? We follow a methodology similar to that of RTE-1 through RTE-5 for entailment set derived from question answering data. The text corpus in our entailment dataset is the set of de-identified patient records associated with the QA dataset. To generate hypotheses, human annotators converted questions into multiple assertive sentences, which is somewhat similar to what was done in the first five RTE challenges (RTE-1 through RTE-5). For a given question, the human annotators plugged in clinically-plausible answers to convert the question into a statement that may or may not be true about a given patient. Table 2 shows example hypotheses and their source questions. Note that this procedure for hypothesis generation diverges slightly from the RTE procedure, where answers from a question answering system were plugged into the questions to produce assertive sentences. To generate entailment annotations, we paired a hypothesis with every sentence in a subset of clinical notes of the EHR, and asked human annotators to determine if the note sentence enabled them to conclude an entailment relationship with the hypothesis. For example, the text: “The appearance is felt to be classic for early MS.” entails the hypothesis: “She has multiple sclerosis”. While in the RTE procedure, a Lucene search was used as a filter to limit the number of hypothesis-sentence pairs that are annotated, in our clinical dataset we 2a publication describing the question-answering dataset is currently under review at another venue 1247 Question Hypotheses When was the patient diagnosed with dermatomyositis? The patient was diagnosed with dermatomyositis two years ago. Any creatinine elevation? Creatinine is elevated. Creatinine is normal. Why were xrays done on the forearm and hand? Xrays were done on the forearm and hand for suspected fracture. Table 2: Example question →hypotheses mappings limit the number of annotations by pairing each hypothesis only with sentences from EMR notes containing an answer to the original question in the QA dataset. The entailment annotations were generated by two medical students with the help of the annotations generated for QA. 11 medical students created our QA dataset of 5696 questions over 71 patient records, of which 1747 questions have corresponding answers. This was generated intermittently over a period of 11 months. Given the QA dataset, the time taken to generate entailment annotations includes conversion of questions to hypotheses, and annotating entailment. While conversion of questions to hypotheses took approx. 2 hours for 20 questions, generating about 3000 hypothesis and text pairs took approx. 16 hours. At the end of this process, we had a total of 243 hypotheses annotated against sentences from 380 clinical notes, to generate 25,584 text-hypothesis pairs. We split this into train, development and test sets, summarized in Table 1. Although we have a fairly limited number of labeled text-hypothesis pairs, we do have a large number of patient health records (besides the ones in the annotated set). 
We generated unlabeled data in the clinical domain, by pairing the hypotheses from our training data with sentences from a set of randomly sampled subset of health records outside of the annotated data. Datasets for the textual entailment search task are highly skewed towards the non-entailment class. Note that our clinical data, while smaller in size than the newswire data, maintains a similar class imbalance. 4 Supervised Entailment System We begin by defining, in this section, our supervised entailment system (called ENT) that is used as the basis of our self-training and active learning experiments. Our system draws upon characteristics and features of systems that have previously been successful in the RTE challenges in the newswire domain. We further enhance this system with new features targeting the clinical domain. The purpose of this section is to demonstrate, through an experimental comparison with other entailment systems, that ENT is competitive on both domains, and is a reasonable supervised system to use in our investigations into selftraining and active learning. 4.1 System Description Top systems (Tsuchida and Ishikawa, 2011; Mirkin et al., 2009) in the RTE challenges have used various types of passage matching approaches in combination with machine learning for entailment. We follow along these lines, and design a classifier-based entailment system. For every text-hypothesis pair in the dataset we extract a feature vector representative of that pair. Then, using the training data, we train a classifier to make entailment decisions on unseen examples. In our system, we employ a logistic regression with ridge estimator (the Weka implementation (Hall et al., 2009)), powered by a variety of passage matching features described below. Underlying many of our passage match features is a more fine-grained notion of “term match”. Term matchers are a set of algorithms that attempt to match tokens (including multi-word tokens, such as New York or heart attack) across a pair of passages. One of the simplest examples of these is exact string matcher. A token in one text passage that matches exactly, characterfor-character, with a token in another text passage would be considered a term match by this simple term matcher. However, these term matchers could be more sophisticated and match pairs of terms that are synonyms, or paraphrases, or 1248 Exact String match, ignore case Multi-word Overlapping terms in multi-word token Head String match head of multi-word token Wikipedia Wikipedia redirects and disamb. pages Morphology Derivational morphology, e.g. archaeological →archaeology Date+Time Match normalized dates and times Verb resource Match verbs using WordNet, Moby thesaurus, manual resources UMLS Medical concept match using UMLS Translation Affix-rule-based translation of medical terms to layman terms Table 3: ENT term matchers equivalent to one another according to other criteria. ENT employs a series of term matchers listed in Table 3. Each of these may also produce a confidence score for every match they find. Because we are working with clinical data, we added some medical domain term matchers as well – using UMLS (Bodenreider, 2004) and a rule-based “translator” of medical terms to layman terms3. Listed below are all of our features used in the ENT’s classifier. Most passage match features aggregate the output of the term matchers along various linguistic dimensions – lexical, syntactic, semantic, and document/passage characteristics. 
Lexical: This set includes a feature aggregating exact string matches across text-hypothesis, one aggregating all term matchers, a feature counting skip-bigram matches (using all matchers), a measure of matched term coverage of text (ratio of matched terms to unmatched terms). Additionally, we have some medical domain features, viz. UMLS concept overlap, and a measure of UMLS-based similarity (Shivade et al., 2015; Pedersen et al., 2007) using the UMLS::Similarity tool (McInnes et al., 2009). Syntactic: Following the lead of several approaches textual entailment (Wang and Zhang, 2009; Mirkin et al., 2009; Kouylekov and Negri, 2010) we have a features measuring the similarity of parse trees. Our rule-based syntactic parser (McCord, 1989) produces dependency parses the text-hypothesis pair, whose nodes are aligned using all of the term matchers. The tree match feature is an aggregation of the aligned subgraphs in the tree (somewhat similar to a tree kernel (Moschitti, 2004)). 3Rules for medical term translator were derived from http://www.globalrph.com/medterm.htm Semantic: We apply open domain as well as medical entity and relation detectors (Wang et al., 2011; Wang et al., 2012) to the texts, and post features measuring overlap in detected entities and overlap in the detected relations across the text-hypothesis pair. We also have a rule-based semantic frame detector for a “medical finding” frame (patient presenting with symptom or disease). We post a feature that aggregates matched elements of detected frames. Passage Characteristics: Clinical notes typically have a structure and the content is often organized in sections (e.g. History of Illness followed by Physical Examination and ending with Assessment and Plan). We identified the section in which each note sentence was located and used them as features in the classifier. Clinical notes are also classified into many different categories (e.g., discharge summary, radiology report, etc.), which we generate features from. We also generate several features capturing the “readability” of the text segments – parse failure, list detector, number of verbs, word capitalization, no punctuation and sentence size. We also have a measure of passage topic relevance based on medical concepts in the pair of texts. 4.2 System Performance To compare effectiveness of ENT on the entailment task, we chose two publicly available systems – EDITS and TIE – for comparison. Both these system are available under the Excitement Open Platform (EOP), an initiative (Magnini et al., 2014) to make tools for textual entailment freely available4 to the NLP community. EDITS (Edit Distance Textual Entailment Suite) by Kouylekov and Negri (2010) is an open source textual entailment system that uses a set of rules and resources to perform “edit” operations on the text to convert it into the hypothesis. There are costs associated with the operations, and an overall cost is computed for the text-hypothesis pair, which determines the decision for that pair. This system has placed third (out of eight teams) in RTE5, and seventh (out of thirteen teams) in RTE-7. The Textual Inference Engine (TIE) (Wang and Zhang, 2009) is a maximum entropy based entailment system relying on predicate argument structure matching. 
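For concreteness, the lexical overlap features of Section 4.1 can be sketched as follows. This is an illustration only, not ENT's implementation (which runs on Weka with a much richer set of term matchers); tokenization is assumed to be done elsewhere, and the coverage definition shown in the comments is one possible reading of the feature described above.

    def exact_matches(text_tokens, hyp_tokens):
        """Lower-cased exact string term matcher (the simplest matcher in Table 3)."""
        text_vocab = {t.lower() for t in text_tokens}
        return [h for h in hyp_tokens if h.lower() in text_vocab]

    def skip_bigrams(tokens, max_gap=2):
        """All ordered token pairs separated by at most max_gap intervening tokens."""
        pairs = set()
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + 2 + max_gap, len(tokens))):
                pairs.add((tokens[i].lower(), tokens[j].lower()))
        return pairs

    def lexical_features(text_tokens, hyp_tokens):
        matched = exact_matches(text_tokens, hyp_tokens)
        unmatched = len(hyp_tokens) - len(matched)
        return {
            "exact_match_count": len(matched),
            # matched-term coverage, here read as matched-to-unmatched ratio
            "term_coverage": len(matched) / max(unmatched, 1),
            "skip_bigram_overlap": len(skip_bigrams(text_tokens)
                                       & skip_bigrams(hyp_tokens)),
        }

In the full system these counts would be aggregated over all term matchers (not only exact string match) and combined with the syntactic, semantic, and passage-characteristic features before being fed to the logistic regression classifier.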
While TIE did not participate in the RTE challenges, it has been shown to be effective on the RTE datasets. In our experiments, we trained the EDITS system optimizing for F-score (the default optimization criterion is accuracy) and TIE with its default settings. We also used a Lucene baseline similar to the one used in the RTE-5, RTE-6 and RTE-7 entailment challenges. We trained the systems on the training set of each domain and tested on the test set. The Lucene baseline considers the first N sentences (where N is 5, 10, 15 or 20) top-ranked by the search engine to be entailing the hypothesis. The configuration with the top 10 sentences performed the best, and is reported in the results. Note that this baseline is a strong one: none of the systems participating in RTE-5 could beat it.

    System        Newswire                  Clinical
                  P      R      F           P      R      F
    Lucene        0.47   0.48   0.47*       0.16   0.22   0.19
    EDITS         0.22   0.57   0.32        0.23   0.21   0.20
    TIE           0.66   0.21   0.31        0.43   0.01   0.02
    ENT           0.77   0.26   0.39        0.42   0.15   0.23*
    Table 4: System performance on test data (* indicates statistical significance)

4 http://hltfbk.github.io/Excitement-Open-Platform/

Table 4 summarizes the system performance on newswire and clinical data. We observe that systems that did well on the RTE datasets were mediocre on the clinical dataset. We did not, however, put any effort into adaptation of TIE and EDITS to the clinical data, so their mediocre performance on clinical data is understandable. It is interesting to see, though, that ENT did well (comparatively) on both domains. We note that our problem setting is most similar to the RTE-5 entailment search task. Of the 20 runs across eight teams that participated in RTE-5, the median F-score was 0.30 and the best system (Mirkin et al., 2009) achieved an F-score of 0.46. EDITS and TIE perform slightly above the median, and ENT (with 0.39 F-score) would have ranked third in the challenge.
The performance of all systems on the clinical data is noticeably low compared to the newswire data. An obvious difference between the two domains is the training data size (see Table 1). However, obtaining annotations for textual entailment search is expensive, particularly in the clinical domain. The remaining sections present our investigations into self-training and active learning to overcome the lack of training data.

5 Self-Training

Our goal is to exploit unlabeled data, with the hope of augmenting the limited annotated data in a given domain. Self-training is a method that has been successfully used to address limited training data on many NLP tasks, such as parsing (McClosky et al., 2006), information extraction (Huang and Riloff, 2012; Patwardhan and Riloff, 2007), and word sense disambiguation (Mihalcea, 2004). Self-training iteratively increases the size of the training set by automatically assigning labels to unlabeled examples, using a model trained in a previous iteration of the self-training regime.
For our newswire and clinical datasets, using the set of unlabeled text-hypothesis pairs U, we ran the following training regime: a model was created using the training data Ln and applied to the unlabeled data U. From U, all pairs that were classified by the model as entailing with high confidence (above a threshold τ) were added to the labeled training data Ln to generate Ln+1. Non-entailing pairs were ignored. A new model was then trained on Ln+1, and the above process repeated iteratively until a stopping criterion is reached (in our case, all pairs from U are exhausted).
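The iterative regime just described can be summarized in the following sketch. It is illustrative only: the classifier interface (fit, entail_proba) and the data structures are placeholders, and the actual system uses the ENT feature set with a Weka logistic regression rather than the generic classifier shown here. For termination, the sketch stops when no new pairs clear the threshold, which approximates the "U is exhausted" criterion above.

    def self_train(labeled, unlabeled, fit, entail_proba, tau=0.2):
        """Self-training for entailment search (sketch, not the experimental code).

        labeled:      list of (text, hypothesis, label) pairs (L_0)
        unlabeled:    list of (text, hypothesis) pairs (U)
        fit:          trains a classifier from labeled pairs
        entail_proba: classifier confidence that a pair is entailing
        tau:          confidence threshold, tuned on development data
        """
        labeled = list(labeled)
        pool = list(unlabeled)
        while pool:
            model = fit(labeled)
            scored = [(pair, entail_proba(model, pair)) for pair in pool]
            confident = [pair for pair, s in scored if s >= tau]
            if not confident:
                break                      # nothing new to add -> stop
            # High-confidence entailing pairs are added to the training set
            # (L_{n+1}); non-entailing pairs are simply ignored this round.
            labeled.extend((t, h, "entailment") for (t, h) in confident)
            pool = [pair for pair, s in scored if s < tau]
        return fit(labeled)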
The threshold τ determines the confidence of our model for a text-hypothesis pair being classified to the entailment class. This threshold was tuned by varying it incrementally from 0.1 to 0.9 in steps of 0.1. The best τ was determined on the development set, and chosen for the self-training system. Figure 2 shows the effect of τ on the development data. As such, we see that the F-score of the selftrained model is always above that of the baseline ENT system. The F-score increases upto a peak of 0.33 at threshold τ of 0.2 before dropping at higher thresholds. Using this tuned threshold on test set, the comparitive performance on the test set is outlined in Table 5. We observe an F-score 1250 (a) Precision and recall for clinical data (b) F-Score for clinical data (c) Precision and recall for newswire data (d) F-Score for newswire data Figure 2: Self-training on development data Newswire Clinical System Precision Recall F-score Precision Recall F-score ENT 0.77 0.26 0.39 0.42 0.15 0.23 ENT + Self-Training 0.62 0.48 0.54∗ 0.34 0.39 0.36∗ Table 5: Self-training results on test data (* indicates statistical significance) of 0.36, which is significantly greater than that of the vanilla ENT system (0.23). The effect of the threshold on performance correlates with the number of instances added to the training set. When the threshold is low, there are more instances being added (10,799 at threshold of 0.1) into the training set. Therefore, recall is likely to benefit, since the model is exposed to a larger variety of text-hypothesis pairs. However, the precision is low since noisy pairs are likely to be added. When the threshold is high, fewer instances are added (316 at threshold of 0.9). These are the ones that the model is most certain about, suggesting that these are likely to be less noisy. Therefore, the precision is comparatively high. We also ran our self-training approach on the Newswire datasets. We observed similar variations in performance with newswire data as with the clinical data. At threshold of 0.9, fewer instances (49) are added to the training set from the unlabeled data, while a large number of instances (2,861) are added at a lower threshold τ of 0.1. The best performance (F-score of 0.52) was obtained at threshold of 0.3, on the development set. This threshold also resulted in the best performance (0.54) on the test set. Similar to the clinical domain, precision increased but recall decreased as the threshold increased. Again, it is evident from Table 5 that gains obtained from self-training are due to recall. It should be noted that the selftrained system achieves an F-score of 0.54 – substantially better than the best performing system of Mirkin et al. (2009) (F-score, 0.46) in RTE-5. 6 Active Learning Active learning is a popular training paradigm in machine learning (Settles, 2012) where a learning agent interacts with its environment in acquiring a training set, rather than passively receiving independent samples from an underlying distribution. This is especially pertinent in the clinical domain, where input from a medical professional should be sought only when really necessary, because of the high cost of such input. The purpose of exploring 1251 (a) Newswire Data (b) Clinical Data Figure 3: Learning curves for uncertainty sampling and random sampling on test data this paradigm is to achieve the best possible generalization performance at the lowest cost. 
Active learning is an iterative process, and typically works as follows: a model M is trained using a minimal training dataset L. A query framework is used to identify an instance from an unlabeled set U that, if added to L, will result in maximum expected benefit. Gold standard annotations are obtained for this instance and added to the original training set L to generate a new training set L′. In the next iteration, a new model M′ is trained using L′ and used to identify the next most beneficial instance for the training set L′. This is repeated until a stopping criterion is met. This approach is often simulated using a training dataset L of reasonable size. The initial model M is created using a subset A of L. Further, instead of querying a large unlabeled set U, the remaining training data (L −A) is treated as an unlabeled dataset and queried for the most beneficial addition. We carried out active learning in this setting using a querying framework known as uncertainty sampling (Lewis and Gale, 1994). Here, the model M trained using A, queries the instances in (L−A) for instance(s) it is least certain for a prediction label. For probabilistic classifiers the most uncertain instance is the one where posterior probability for a given class is nearest to 0.5. To estimate the effectiveness of this framework, it is always compared with a random sampling framework, where random instances from the training data are incrementally added to the model. Starting with a model trained using a single randomly chosen instance, we carried out active learning using uncertainty sampling, adding one instance at a time. After the addition of each instance, the model was retrained and tested on a held out set. To minimize the effect of randomization associated with the first instance, we repeated the experiment ten times and averaged the performance scores across the ten runs. Following previous work (Settles and Craven, 2008; Reichart et al., 2008) we evaluate active learning using learning curves on the test set. Figure 3 shows the learning curves for newswire and clinical data. On clinical data, uncertainty sampling achieves a performance equal to the baseline ENT with only 470 instances. With random sampling, over 2,200 instances are required. The active learner matches the performance of the ENT with only 6.6% of training data. Newswire shows a similar trend, with both sampling strategies outperforming ENT, using less than half the training, and uncertainty sampling learning faster than random. While uncertainty sampling matches ENT F-score with only 1,169 instances, random sampling requires 2,305. Here, the active learner matches ENT performance using only 5.8% of the training data. 7 Effect of Class Distribution After analyzing our experimental results, we considered that one possible explanation for the improvements over baseline ENT could plausibly be because of changes in the class distribution. From Table 1, we observe that the distribution of classes in both domains is highly skewed (only 4-5% positive instances). Self-training and active learning dramatically change the class distribution in training. To assess the effect of class distribution changes on performance, we ran additional experiments, described here. We first investigated sub-sampling (Japkowicz, 2000) the training data to address class imbalance. 
Sub-sampling includes either down-sampling the majority class or up-sampling the minority class until the classes are balanced. We found no significant gains over the vanilla ENT baseline with either strategy. Specifically, down-sampling resulted in gains of only 0.002 and 0.001 F-score, and up-sampling resulted in a drop of 0.011 and 0.013 F-score, on clinical-dev and newswire-dev, respectively.
Another approach to addressing class imbalance is to apply the Synthetic Minority Oversampling Technique (SMOTE) (Chawla et al., 2002). SMOTE creates instances of the minority class by taking a minority class sample and introducing synthetic examples between its k nearest neighbors. Using SMOTE on the newswire and clinical datasets resulted in improvements over the baseline ENT in both domains. The improvements from self-training, however, are significantly higher than those from SMOTE. Figure 4 shows a comparison of SMOTE and self-training on newswire data, where an equal number of instances is added to the training set by both techniques.

[Figure 4: Comparison of SMOTE and self-training (on newswire development set).]

Finally, for active learning, we consider random sampling as a competing approach to uncertainty sampling. Figure 5 illustrates the percentage of positive and negative instances that get included in the training set for both sampling strategies as active learning proceeds.

[Figure 5: Comparison of sampling strategies for active learning (on newswire development set).]

The blue solid line shows that positive instances are consumed faster than the negative instances with uncertainty sampling. Thus, a higher percentage of positive instances (approximately equal to the number of negative instances being added) gets added, and this helps maintain a balanced class distribution. Once the positive instances are exhausted, more negative instances are added, resulting in some class imbalance that hurts performance (even though more training data is being added overall). In contrast, random sampling does not change the class balance, as it consumes a proportional number of positive and negative instances (resulting in more negative than positive instances). The plot indicates that when using uncertainty sampling, 80% of the positive examples are added to the training set with less than 50% of the data. This also explains how the active learner matches the performance of the model trained on the entire labeled set, but with fewer training examples.

8 Conclusion

We explored the problem of textual entailment search in two domains – newswire and clinical – and focused a spotlight on the cost of obtaining labeled data in certain domains. In the process, we first created an entailment dataset for the clinical domain, and a highly competitive supervised entailment system, called ENT, which is effective (out of the box) on two domains. We then explored two strategies – self-training and active learning – to address the lack of labeled data, and observed some interesting results. Our self-training system substantially improved over ENT, achieving an F-score gain of 15% on newswire and 13% on clinical, using only additional unlabeled data. On the other hand, our active learning experiments demonstrated that we could match (and even beat) the baseline ENT system with only 6.6% of the training data in the clinical domain, and only 5.8% of the training data in the newswire domain.
Acknowledgments We thank our in-house medical expert, Jennifer Liang, for guidance on the data annotation task, our medical annotators for annotating clinical data for us, and Murthy Devarakonda for valuable insights during the project. We also thank Eric Fosler-Lussier and Albert M. Lai for their help in conceptualizing this work. 1253 References Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The Fifth PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the Second Text Analysis Conference, Gaithersburg, MD. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2010. The Sixth PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the Third Text Analysis Conference, Gaithersburg, MD. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2011. The Seventh PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the Fourth Text Analysis Conference, Gaithersburg, MD. Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): Integrating Biomedical Terminology. Nucleic Acids Research, 32(Database Issue):D267–D270. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632– 642, Lisbon, Portugal. John Burger and Lisa Ferro. 2005. Generating an Entailment Corpus from News Headlines. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 49–54, Ann Arbor, MI. Asli Celikyilmaz, Marcus Thint, and Zhiheng Huang. 2009. A Graph-based Semi-Supervised Learning for Question-Answering. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 719–727, Singapore. Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16:321–357. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software. ACM SIGKDD Explorations Newsletter, 11(1):10. Andrew Hickl and Jeremy Bensley. 2007. A Discourse Commitment-based Framework for Recognizing Textual Entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 171–176, Prague, Czech Republic. Andrew Hickl, Jeremy Bensley, John Williams, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recognizing Textual Entailment with LCC’s GROUNDHOG System. In The Second PASCAL Recognizing Textual Entailment Challenge: Proceedings of the Challenges Workshop, Venice, Italy. Ruihong Huang and Ellen Riloff. 2012. Bootstrapped Training of Event Extraction Classifiers. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 286–295, Avignon, France. Nathalie Japkowicz. 2000. Learning from Imbalanced Data Sets: A Comparison of Various Strategies. In Proceedings of the AAAI Workshop on Learning from Imbalanced Data Sets, pages 10–15, Austin, TX. Houping Jia, Xiaojiang Huang, Tengfei Ma, Xiaojun Wan, and Jianguo Xiao. 2010. 
PKUTM Participation at TAC 2010 RTE and Summarization Track. In Proceedings of the Third Text Analysis Conference, Gaithersburg, MD. Milen Kouylekov and Matteo Negri. 2010. An Open-Source Package for Recognizing Textual Entailment. In Proceedings of the ACL 2010 System Demonstrations, Uppsala, Sweden. David D. Lewis and William A. Gale. 1994. A Sequential Algorithm for Training Text Classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3–12, Dublin, Ireland. Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, Neumann Guenter, Tae-Gil Noh, Sebastian Pad´o, Asher Stern, and Omer Levy. 2014. The Excitement Open Platform for Textual Inferences. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 43–48, Baltimore, MD. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective Self-Training for Parsing. In Proceedings of Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 152– 159, New York City, NY. Michael McCord. 1989. Slot Grammar: A System for Simpler Construction of Practical Natural Language Grammars. In Proceedings of the International Symposium on Natural Language and Logic, pages 118–145, Hamburg, Germany. Bridget T McInnes, Ted Pedersen, and Serguei V. S. Pakhomov. 2009. UMLS-Interface and UMLSSimilarity : Open Source Software for Measuring Paths and Semantic Similarity. In Proceedings of the Annual Symposium of the American Medical Informatics Association, San Francisco, CA. 1254 Rada Mihalcea. 2004. Co-Training and Self-Training for Word Sense Disambiguation. In Proceedings of the Eighth Conference on Natural Language Learning, pages 33–40, Boston, MA. Shachar Mirkin, Roy Bar-Haim, Jonathan Berant, Ido Dagan, Eyal Shnarch, Asher Stern, and Idan Szpektor. 2009. Addressing Discourse and Document Structure in the RTE Search Task. In Proceedings of the Second Text Analysis Conference, Gaithersburg, MD. Alessandro Moschitti. 2004. A Study on Convolution Kernels for Shallow Statistic Parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 335–342, Barcelona, Spain. Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 717–727, Prague, Czech Republic. Ted Pedersen, Serguei V. S. Pakhomov, Siddharth Patwardhan, and Christopher G Chute. 2007. Measures of Semantic Similarity and Relatedness in the Biomedical Domain. Journal of Biomedical Informatics, 40(3):288–99. Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. 2008. Multi-Task Active Learning for Linguistic Annotations. In Proceedings of ACL-08: HLT, pages 861–869, Columbus, OH. Burr Settles and Mark Craven. 2008. An Analysis of Active Learning Strategies for Sequence Labeling Tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1070–1079, Honolulu, HI. Burr Settles. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1):1–114. Chaitanya Shivade, Courtney Hebert, Marcelo Loptegui, Marie-Catherine de Marneffe, Eric Fosler-Lussier, and Albert M. Lai. 2015. Textual Inference for Eligibility Criteria Resolution in Clinical trials. 
Journal of Biomedical Informatics, 58:S211–S218. Masaaki Tsuchida and Kai Ishikawa. 2011. IKOMA at TAC2011 : A Method for Recognizing Textual Entailment using Lexical-level and Sentence Structurelevel Features. In Proceedings of the Fourth Text Analysis Conference, Gaithersburg, MD. Rui Wang and Yi Zhang. 2009. Recognizing Textual Relatedness with Predicate-Argument Structures. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 784–792, Singapore. Chang Wang, James Fan, Aditya Kalyanpur, and David Gondek. 2011. Relation Extraction with Relation Topics. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1426–1436, Edinburgh, UK. Chang Wang, Aditya Kalyanpur, James Fan, Branimir K. Boguraev, and David Gondek. 2012. Relation Extraction and Scoring in DeepQA. IBM Journal of Research and Development, 56(3.4):9:1–9:12. Fabio M. Zanzotto and Alessandro Moschitti. 2006. Automatic Learning of Textual Entailments with Cross-Pair Similarities. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 401–408, Sydney, Australia. Fabio M. Zanzotto and Marco Pennacchiotti. 2010. Expanding Textual Entailment Corpora from Wikipedia using Co-training. In Proceedings of the 2nd Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 28–36, Beijing, China. 1255
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1256–1265, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Annotating and Predicting Non-Restrictive Noun Phrase Modifications Gabriel Stanovsky Ido Dagan Computer Science Department, Bar-Ilan University [email protected] [email protected] Abstract The distinction between restrictive and non-restrictive modification in noun phrases is a well studied subject in linguistics. Automatically identifying non-restrictive modifiers can provide NLP applications with shorter, more salient arguments, which were found beneficial by several recent works. While previous work showed that restrictiveness can be annotated with high agreement, no large scale corpus was created, hindering the development of suitable classification algorithms. In this work we devise a novel crowdsourcing annotation methodology, and an accompanying large scale corpus. Then, we present a robust automated system which identifies non-restrictive modifiers, notably improving over prior methods. 1 Introduction Linguistic literature provides a large body of research distinguishing between two types of modifiers within noun phrases: (1) Restrictive modifiers, which constitute an integral part of the entity referenced by the NP, e.g., the underlined modifier in “She wore the necklace that her mother gave her”, versus (2) Non-restrictive modifiers, which provide an additional or parenthetical information on an already definite entity, e.g., “The speaker thanked president Obama who just came into the room” (Huddleston et al., 2002; Fabb, 1990; Umbach, 2006). The distinction between the two types is semantic in nature and relies heavily on the context of the NP. Evidently, many syntactic constructions can appear in both restrictive and non-restrictive uses. While the previous examples were of relative clauses, Figure 1 demonstrates this distinction in various other syntactic constructions. Identifying and removing non-restrictive modifiers yields shorter NP arguments, which proved beneficial in many NLP tasks. In the context of abstractive summarization (Ganesan et al., 2010) or sentence compression (Knight and Marcu, 2002), non-restrictive modifiers can be removed to shorten sentences, while restrictive modification should be preserved. Further, recent work in information extraction showed that shorter arguments can be beneficial for downstream tasks. Angeli et al. (2015) built an Open-IE system which focuses on shorter argument spans, and demonstrated its usefulness in a state-of-the-art Knowledge Base Population system. Stanovsky et al. (2015) compared the performance of several off-the-shelf analyzers in different semantic tasks. Most relevant to this work is the comparison between Open-IE and Semantic Role Labeling (Carreras and M`arquez, 2005). Specifically, they suggest that SRL’s longer arguments introduce noise which hurts performance for downstream tasks. Finally, in question answering, omitting nonrestrictive modification can assist in providing more concise answers, or in matching between multiple answer occurrences. Despite these benefits, there is currently no consistent large scale annotation of restrictiveness, which hinders the development of automatic tools for its classification. In prior art in this field, Dornescu et al. (2014) used trained annotators to mark restrictiveness in a large corpus. 
Although they reached good agreement levels in restrictiveness annotation, their corpus suffered from inconsistencies, since it conflated restrictiveness annotation with inconsistent modifier span annotation. The contributions of this work are twofold. Primarily, we propose a novel crowdsroucing anno1256 tation methodology which decouples the binary (restrictive / non-restrictive) distinction from the modifier span annotation (Section 3). Following this methodology, in Section 4 we present a large scale annotated corpus, which will allow further research into the automatic identification of nonrestrictive modification. Additionally, we developed a strong automatic classifier, which learns from our new corpus (Section 5). This classifier uses new linguistically motivated features which are robust enough to perform well over automatically predicted parse trees. The corpus and the automatic classifier are both made publicly available.1 While there is still much room for improvement, especially in some of the harder, more context-dependent, cases (most notably, prepositional and adjectival modifiers), our system provides an applicable means for identifying nonrestrictive modification in a realistic NLP setting. 2 Background In this section we cover relevant literature from several domains. In Section 2.1 we discuss the established linguistic distinction between restrictive and non-restrictive modification. Following, in Section 2.2 we discuss previous NLP work on annotating and identifying this distinction. Finally, in Section 2.3 we briefly describe the recent QASRL annotation paradigm (He et al., 2015), which we utilize in Section 3 as part of our annotation scheme. 2.1 Non-Restrictive Modification Throughout the paper we follow Huddleston et al.’s (2002) well-known distinction between two types of NP modifiers: (1) Restrictive modifiers, for which the content of the modifier is an integral part of the meaning of the containing NP, and, in contrast, (2) Non-restrictive modifiers, that present a separate, parenthetical unit of information about the NP. While some syntactic modifiers (such as determiners or genitives) are always restrictive, others are known to appear in both restrictive as well as non-restrictive uses, depending on semantics and context (Huddleston et al., 2002; Fabb, 1990; Umbach, 2006). Among these are relative clauses, adjectival, prepositional, non-finite, and verbal mod1http://www.cs.biu.ac.il/˜nlp/ resources/downloads (RC1) The necklace that her mother gave her+ is in the safe. (RC2) The governor disagreed with the U.S ambassador to China who seemed nervous−. (NF1) People living near the site+ will have to be evacuated. (NF2) sheriff Arthur Lester, standing against the wall−, looked tired. (PP1) The kid from New York+ won the lottery. (PP2) The assassination of Franz Ferdinand from Austria−started WWI. (AD1) The good+ boys won. (AD2) The water level rose a good−12 inches. Figure 1: Restrictive (marked in red and a plus sign) and non-restrictive (marked in blue and a minus sign) examples in different syntactic constructions, see elaboration in Section 2. Examples index: RC - Relative clause, NF - Non-finite clauses (Huddleston et al. [p. 1265]), PP - Prepositional modifiers, AD - Adjectival modifiers (Huddleston et al. [p. 528]). ifiers. See Figure 1 for examples of different syntactic constructions appearing in both restrictive as well as non-restrictive contexts. For example, for relative clause, Huddleston et al. [p. 
1058] identifies both restrictive as well as non-restrictive uses (for which they use the terms integrated and supplementary, respectively). In the sentence marked (RC1), the highlighted relative clause is restrictive, distinguishing the necklace being referred to from other necklaces, while in sentence (RC2), the relative clause does not pick an entity from a larger set, but instead presents separate information about an already specified definite entity. 2.2 Non-Restrictive Modification in NLP Syntactic and semantic annotations generally avoid the distinction between restrictive and nonrestrictive modification (referred here as “restrictiveness” annotation). The syntactic annotation of the Penn TreeBank (Marcus et al., 1993) and its common conversion to dependency trees (e.g., (de Marneffe and Manning, 2008)) do not differentiate the cases discussed above, providing the same syntactic structure for the semantically different instances. See Figure 2 for an example. Furthermore, prominent semantic annotations, 1257 such as PropBank (Palmer et al., 2005), AMR (Banarescu et al., 2013), CCG (Hockenmaier and Steedman, 2007), or FrameNet (Baker et al., 1998), also avoid this distinction. For example, PropBank does not differentiate between such modifiers, treating both types of modification as an integral part of an argument NP. Two recent works have focused on automatically identifying non-restrictive modifications. Honnibal et al. (2010) added simple automated restrictiveness annotations to NP-modifiers in the CCGbank (Hockenmaier and Steedman, 2007). Following a writing style and grammar rule, a modifier was judged as non-restrictive if and only if it was preceded by a comma.2 This annotation was not intrinsically evaluated, as it was carried as part of an extrinsic evaluation of a statistical parser. Having similar goals to ours, Dornescu et al. (2014) sets the prior art at annotating and predicting non-restrictive modification. In the annotation phase, each of their trained annotators was asked to (1) Mark spans of words in the sentence as forming an NP modifier, and (2) Mark each span they annotated in (1) as either restrictive or nonrestrictive, and specify its type from a predefined list (e.g., relative clause, adjectival modifier, etc.). Their inter-annotator agreement on the first task (modifier span) was low, reaching pairwise F1 score of only 54.9%, possibly due to problems in the annotation procedure, as acknowledged by the authors. The second part of the annotation achieved better agreement levels, reaching kappa of 0.78 (substantial agreement) for type annotation and 0.51 (moderate agreement) for restrictiveness annotation.3 Following the creation of the annotated dataset, they developed rule based and machine learning classifiers. All of their classifiers performed only at about 47% F1, at least partly due to the inconsistencies in span annotation discussed above. To conclude this survey, although an effort was made by Dorenscu et al. (2014), there is currently no available consistent corpus annotated with nonrestrictive modifiers. 2Notice that this is indeed the case in some of the nonrestrictive examples in Figure 1. 3Note that the agreement for the first task is reported in F1 while the second task is reported in Cohen’s kappa. the boy who entered the room det nsubj rcmod det dobj president Obama who entered the room nn nsubj rcmod det dobj Figure 2: Restrictive (top) and non-restrictive (bottom) NP modifications receive the same representation in dependency trees. 
See Section 2.2. 2.3 QA-SRL Traditional Semantic Role Labeling (SRL) (Carreras and M`arquez, 2005) is typically perceived as answering argument role questions, such as who, what, to whom, when, or where, regarding a target predicate. For instance, PropBank’s ARG0 for the predicate say answers the question “who said something?”. QA-SRL (He et al., 2015) suggests that answering explicit role questions is an intuitive means to solicit predicate-argument structures from nonexpert annotators. Annotators are presented with a sentence in which a target predicate4 was marked, and are requested to annotate argument role questions, phrased using a restricted grammar, and corresponding answers. For example, given the sentence “President Obama who flew to Russia called the vice president” and the target predicate called, an annotator can intuitively provide the following QA pairs: (1) Who called? President Obama and (2) Whom did someone call? the vice president. In order to assess the validity of their annotation scheme, He et al. annotated a large sample of the PropBank corpus (1241 sentences) with QA-SRL, and showed high agreement with PropBank over this sample. In the following sections we make use of these explicit role questions for annotating non-restrictive modifiers. 3 Annotation Methodology As mentioned in the Introduction, the first goal of this work is to assemble a large and consistent corpus, annotated with non-restrictive modifications. In this section, we present a crowdsourcing methodology which allows us to generate such corpus in a cost-effective manner (Section 3.2). As a preliminary step, we conducted a smaller scale 4Currently consisting of automatically annotated verbs. 1258 expert annotation (Section 3.1), which will serve as a gold standard with which to test the crowsdsourced annotations. 3.1 Expert Annotation Two researchers, with linguistics and NLP education, were presented with a sample of 219 modifiers of NPs in 100 sentences,5 and were asked to annotate each modifier as either restrictive or non-restrictive, according to the linguistic definition presented in Section 2. Prior to annotating the expert dataset, the annotators discussed the process and resolved conflicts on a development set of 20 modifiers. The annotators agreement was found to be high, reaching agreement on 93.5% of the instances, and κ of 84.2% . An analysis of the few disagreements found that the deviations between the annotators stem from semantic ambiguities, where two legitimate readings of the sentence led to disagreeing annotations. For example, in “sympathetic fans have sent Ms. Shere copies of her recipes clipped from magazines over the years”, one annotator read the underlined modifier clause as restrictive, identifying particular recipes, while the second annotator read the modifier as non-restrictive, adding supplementary information on the sent recipes. Finally, we compose the expert annotation dataset from the 207 modifiers agreed upon by both annotators. In the next section we use this dataset to evaluate the quality of our crowdsourced annotations. 3.2 Crowdsourcing Annotation Process In our scheme, each annotation instance assigns a binary label (restrictive or non-restrictive) to a 4tuple (s, v, p, m) – where m is a modifier of the noun phrase p, which is an argument of a verbal predicate v, in a sentence s. 
We incorporate v in our scheme in order to provide non-trained annotators with an argument role question (discussed in 2.3), as elaborated below.6 Consider, for example, the sentence s – “the speaker thanked [President Obama who just entered the room]”. We want to annotate the restrictiveness value of the relative clause m (underlined), which modifies the matrix noun phrase p 5These were taken at random from the development partition of the corpus described in Section 4. 6Our annotation currently covers the most common case of NPs which serve as arguments of verbal predicates. (bracketed), which is in turn an argument of a governing predicate v (in bold). Our annotation procedure does not require the annotator to be familiar with the formal linguistic definition of restrictiveness. Instead, we use binary question-answering (true / false questions) as an intuitive formulation of non-restrictive modification. We present annotators with the argument role question pertaining to the argument NP, and ask whether this NP without the modifier gives the same answer to the argument role question as the original NP did. In our example, an annotator is presented with the argument role question “whom did someone thank?” (which is answered by p), and is asked to decide whether the reduced NP, “President Obama”, provides the same answer to the question as the full NP does. If the answer is positive (as in this case), we consider the modifier to be non-restrictive, otherwise we consider it to be restrictive. As an example for the restrictive case, consider “she wore [the necklace that her mother gave her]”, and the respective argument role-question “what did someone wear?”. In this case, as opposed to the previous example, the reduced NP (“the necklace”) does not refer to the same entity as the original NP, since we lose the specific identity of the necklace which was worn. The intuition for this process arises from the linguistic definition for modifier restrictiveness. Namely, a restrictive modifier is defined as an integral part of the NP, and a non-restrictive modifier as providing supplementary or additional information about it. Therefore, in the restrictive case, omitting the modifier would necessarily change the meaning of the answer, while in the nonrestrictive case, omitting it would not change the entity referenced by the full NP, and would therefore provide the same answer to the argument role question. 4 Corpus In this section we describe the creation of a consistent human-annotated restrictiveness corpus, using the annotation process described in the previous section. We show this corpus to be of high quality by comparing it with the independent expert annotation. In Section 5 we use this corpus to train and test several automatic classifiers. 1259 Modifier Type Identified By # Non-Restrictive Agreement κ % Adjectival pos = JJ 684 41.36% 74.70 87.36 Prepositional pos = IN / TMP / LOC 693 36.22% 61.65 85.10 Appositive rel = APPO / PRN 342 73.68% 60.29 80.00 Non-Finite rel = TO 279 68.82% 71.04 86.48 Verbal pos = VB and not relative clause 150 69.33% 100 100 Relative clause pos = VB and child pos = WP 43 79.07% 100 100 Total 2191 51.12% 73.79 87.00 Table 1: Corpus statistics by modifier types, which were identified by part of speech (pos) and dependency label (rel) (Section 4.1). The number of instances (#) and non-restrictiveness percentage refer to the full crowdsourced annotation. 
Agreement (Cohen’s κ and percent of matching instances) is reported for the expert-annotated data (Section 4.2), between the expert and crowdsourced annotations. 4.1 Data Collection We use the dataset which He et al. (2015) annotated with Question-Answer pairs (discussed in Section 2.3), and keep their train / dev / test split into 744 / 249 / 248 sentences, respectively. This conveniently allows us to link between argument NPs and their corresponding argument role question needed for our annotation process, as described in previous section. This dataset is composed of 1241 sentences from the CoNLL 2009 English dataset (Hajiˇc et al., 2009), which consists of newswire text annotated by the Penn TreeBank (Marcus et al., 1993), PropBank (Palmer et al., 2005), and NomBank (Meyers et al., 2004), and converted into dependency grammar by (Johansson and Nugues, 2008). As mentioned in Section 3.2, each of our annotation instances is composed of a sentence s, a verbal predicate v, a noun phrase p, and a modifier m. We extract each such possible tuple from the set of sentences in the following automatic manner: 1. Identify a verb v in the gold dependency tree. 2. Follow its outgoing dependency arcs to a noun phrase argument p (a dependent of v with a nominal part of speech). 3. Find m, a modifying clause of p which might be non-restrictive, according to the rules described in Table 1, under the “Identified By” column. This filters out modifiers which are always restrictive, such as determiners or genitives, following (Huddleston et al., 2002), as discussed in Section 2. Notice that this automatic modifier detection decouples the span annotation from the restrictiveness annotation, which was a source for inconsistencies in Dornescu et al’s annotation (Section 2.2). This automatic process yields a dataset of 2191 modifiers of 1930 NPs in 1241 sentences. We note that our collection process ensures that the corpus correlates with the syntactic dependency annotation of the CoNLL 2009 shared task, and can therefore be seen as an augmentation of its modifier labels to include restrictiveness annotations. In order to find the corresponding argument role question, we follow the process carried by He et al.; An argument NP is matched to an annotated Question-Answer pair if the NP head is within the annotated answer span. Following this matching process yields a match for 1840 of the NPs. For the remaining 90 NPs we manually compose an argument role question by looking at the governing predicate and its argument NP. For example, given the sentence “[The son of an immigrant stonemason of Slovenian descent] was raised in a small borough outside Ebensburg”, the predicate raised and the bracketed NP argument, we produce the argument role question “Who was raised?”. The corpus category distribution is depicted in Table 1, under column labeled “#”. In later sections we report agreement and performance across these categories to produce finer grained analyses. 4.2 Crowdsourcing Annotation We use Amazon Mechanical Turk7 to annotate the 2191 modifiers for restrictiveness, according to the 7https://www.mturk.com 1260 process defined in Section 3.2. Each modifier was given to 5 annotators, and the final tag was assigned by majority vote. We used the development set to refine the guidelines, task presentation, and the number of annotators. Each annotator was paid 5c for the annotation of an NP, which in average provided 1.16 modifiers. 
This sets the average price for obtaining a single modifier annotation at 5 · 5 / 1.16 ≈ 21.5c. The agreement with the 217 NP modifiers annotated by the experts (Section 3.1) and the percentage of positive (non-restrictive) examples per category can be found in Table 1, in the columns labeled “agreement”. The labels are generally balanced, with 51.12% non-restrictive modifiers in the entire dataset (varying between 36.22% for prepositional modifiers and 79.07% for relative clauses). Overall, the crowdsourced annotation reached good agreement levels with our expert annotation, achieving a κ score of 73.79 (substantial agreement). The lowest agreement levels were found on prepositional and appositive modifiers (61.65% and 60.29%).8 8While the linguistic literature generally regards appositives as non-restrictive, some of the appositions marked in the dependency conversion are in fact misclassified coordinations, which explains why some of them were marked as restrictive. Indeed, as discussed in Section 2, these are often subtle decisions which rely heavily on context. For example, the following instances were disagreed upon between our expert annotation and the crowdsourced annotation: in “[Charles LaBella, the assistant U.S. attorney prosecuting the Marcos case], didn’t return phone calls seeking comment” (an appositive example), the experts annotated the underlined modifier as non-restrictive, while the crowdsourced annotation marked it as restrictive. Conversely, in “The amendment prompted [an ironic protest] from Mr. Thurmond”, the experts annotated the adjectival modifier as restrictive, while the crowdsourced annotation tagged it as non-restrictive. 5 Predicting Non-Restrictive Modification In this section we present an automatic system which (1) identifies NP modifiers in a dependency parser’s output (as shown in Table 1, column “Identified By”) and (2) uses a CRF model to classify each modifier as either restrictive or non-restrictive, based on the features listed in Table 2 and elaborated below.9 5.1 Baselines We begin by replicating the algorithms of the two prior works discussed in Section 2.2. This allows us to test their performance consistently against our new human-annotated dataset. Replicating (Honnibal et al., 2010) They annotated a modifier as non-restrictive if and only if it was preceded by a comma. We re-implement this baseline and classify all of the modifiers in the test set according to this simple property. Replicating (Dornescu et al., 2014) Their best performing ML-based algorithm10 uses the supervised CRFsuite classifier (Okazaki, 2007) over “standard features used in chunking, such as word form, lemma and part of speech tags”. Replicating their baseline, we extract the list of features detailed in Table 2 (in the row labeled “chunking features”). 5.2 Our Classifier In addition to Dornescu et al.’s generic chunking framework, we also extract features which were identified in the linguistic literature as indicative of non-restrictive modification. These features are then fed to the CRFsuite classifier (the same CRF classifier used by Dornescu et al.) to make the binary decision. The following paragraphs elaborate on the motivation for each of the features. Enclosing commas We extend Honnibal et al.’s classification method as a binary feature which marks whether the clause is both preceded and terminated by a comma. This follows from a well-known writing-style and grammar rule which indicates that non-restrictive clausal modifiers should be enclosed in commas.
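Before moving on to the remaining features, the two comma-based signals just described can be made concrete with a short sketch. The token-level representation and helper names below are illustrative assumptions, not the system's actual implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class Modifier:
    start: int  # index of the first token of the modifier span
    end: int    # index one past the last token of the modifier span

def preceded_by_comma(tokens: List[str], mod: Modifier) -> bool:
    # Honnibal et al. (2010) baseline: non-restrictive iff a comma precedes the modifier
    return mod.start > 0 and tokens[mod.start - 1] == ","

def enclosed_by_commas(tokens: List[str], mod: Modifier) -> bool:
    # our binary feature: the clause is both preceded and terminated by a comma
    # (sentence-final modifiers count as terminated)
    before = mod.start > 0 and tokens[mod.start - 1] == ","
    after = mod.end >= len(tokens) - 1 or tokens[mod.end] in {",", "."}
    return before and after

tokens = "sheriff Arthur Lester , standing against the wall , looked tired .".split()
mod = Modifier(start=4, end=8)  # the non-finite clause "standing against the wall"
print(preceded_by_comma(tokens, mod), enclosed_by_commas(tokens, mod))  # True True

In the actual classifier these booleans would simply be added to the CRFsuite feature dictionary alongside the chunking, governing-relative and embedding features listed in Table 2.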
Governing relative In the linguistic literature, it was posited that the word introducing a clausal modifier (termed relative) is an indication for the restrictiveness of the subordinate clause. For example, Huddleston. et al. (2002) [p. 1059] analyzes the word “that” as generally introducing a restrictive modifier, while a wh-pronoun is 9We use our new crowdsourced corpus to train our model as well as the baseline model. 10They also implement a rule-based method, named DAPR, which, when combined with the described ML approach surpasses their ML algorithm by ∼1.5% increase in F1. We could not find a publicly available implementation of this method. 1261 more likely to introduce non-restrictive modification. We therefore extract features of the word which governs the relative, such as the surface form, its lemma, POS tag, and more. The full list is shown under “Governing relative” in Table 2. Named entities As illustrated throughout the paper, modifiers of named entities tend to be nonrestrictive. We run the Stanford Named Entity Recognizer (NER) (Finkel et al., 2005) and introduce a feature indicating the type of named entity (PERSON, ORG or LOC), where applicable. Lexical word embeddings We include the pretrained word embeddings of the modifier’s head word, calculated by (Mikolov et al., 2013). These distributional features help the classifier associate between similar words (for example, if “good” is non-restrictive in some contexts, it is likely that “fine” is also non-restrictive within similar contexts). Modifier type We add the automatically identified modifier type as a feature, to associate certain features as indicative for certain types of modifiers (e.g., enclosing commas might be good indicators for relative clause, while word embeddings can be specifically helpful for adjectival modifiers). 6 Evaluation We use the QA-SRL test section (containing 412 NP modifiers) to evaluate each of the systems described in Section 5 on gold and predicted trees, both provided in the CoNLL 2009 dataset (the predicted dependency relations were obtained using MaltParser (Nivre et al., 2007)). The gold setting allows us to test the performance of the systems without accumulating parser errors. In addition, it allows us to partition and analyze our dataset according to the gold modifier type. The predicted setting, on the other hand, allows us to evaluate our classifier in a real-world application scenario, given automatic parsing output. 6.1 Gold Trees The results for each of the systems across our categories on the gold trees are shown in Table 3. Note that we regard non-restrictive modification as positive examples, and restrictive modification as negative examples. This is in line with the applicative goal of reducing argument span by removing nonrestrictive modifiers, discussed in the Introduction. Switching the labels does not significantly change System Feature Type Description Honnibal et al. Preceding comma w[-1] == , Chunking features feats[head-1] (Dornescu et al.) feats[head] feats[head+1] This paper Enclosing commas true iff the clause is preceded and terminated with commas feats[parent-1] Governing relative feats[parent] feats[parent+1] feats[pobj-1] Prepositions feats[pobj] feats[pobj+1] NER PERSON, ORGANIZATION, LOCATION Lexical word embeddings Mikolov et al’s 300-dimensional continuous word embeddings Modifier type one of the types described in Table 1 Table 2: Features used for classification in each of the systems as described in Section 5. 
head head of the modifier in the dependency tree. parent - parent of head in the dependency tree. pobj - object of the preposition, in case of prepositional head. feats[i] refers to extracting the following features from the word i: POS tag, lemma, is title, is all lower case, is all upper case, is beginning / end of sentence. the numbers, since the corpus is relatively well balanced between the two labels (as can be seen in Table 1). Following are several observations based on an error analysis of these results. Prepositional and adjectival modifiers are harder to predict All systems had more difficulties in classifying both of these categories. This reiterates the relatively lower agreement for these categories between the crowdsource and expert annotation, discussed in Section 4.2. For clausal modifiers, preceding commas are good in precision but poor for recall As can be seen in Honnibal et al.’s columns, a preceding comma is a good indication for a non-restrictive clausal modifier (all categories excluding adjectival or verbal modifiers), but classifying solely by its existence misses many of the non-restrictive instances. 1262 Modifier Type # Precision Recall F1 Honnibal Dornescu Our Honnibal Dornescu Our Honnibal Dornescu Our Prepositional 135 .83 .67 .69 .1 .16 .41 .18 .26 .51 Adjectival 111 .33 .38 .59 .06 .06 .21 .11 .11 .31 Appositive 78 .77 .81 .82 .34 .93 .98 .47 .87 .89 Non-Finite 55 .77 .63 .64 .29 .97 .97 .42 .76 .77 Verbal 20 0 .75 .75 0 1 1 0 .86 .86 Relative clause 13 1 .85 .85 .27 1 1 .43 .92 .92 Total 412 .72 .72 .73 .19 .58 .68 .3 .64 .72 Table 3: Test set performance of the 3 different systems described in Sections 5 and 6 on gold trees from the CoNLL 2009 dataset, across the different categories defined in Section 4. Features P R F1 All .73 .68 .72 Baseline - comma .72 .68 .70 - chunking .72 .66 .69 New - governing relative .74 .61 .67 - prepositions .73 .67 .70 - word embeddings .72 .69 .71 - NER .71 .68 .70 - mod type .74 .66 .70 Table 4: Feature ablation tests on gold trees. Each row specifies a different feature set – “All” specifies the entire feature set from Table 2, while each subsequent line removes one type of features. (Dornescu et al., 2014) performs better on our dataset Their method achieves much better results on our dataset (compare 64% overall F1 on our dataset with their reported 45.29% F1 on their dataset). This speaks both for their method as a valid signal for restrictiveness annotation, as well as for the improved consistency of our dataset. Our system improves recall Overall, our system significantly outperforms both baselines by more than 8% gain in F1 score. Specifically, the numbers show clearly that we improve recall in the frequent categories of prepositional and adjectival modifiers. Furthermore, the results of an ablation test on our features (shown in Table 4) show that chunking and governing relative features provide the highest individual impact. 6.2 Predicted Trees To test our classifier in a realistic setting we evaluate its performance on predicted dependency trees. To obtain the candidate modifiers, we use the same extractor presented in previous sections, applied on the predicted trees in the test section of the CoNLL 2009 dataset. We then apply the models System P R F1 Candidate Extraction .91 .93 .92 Honnibal .71 .18 .29 Dornescu .68 .53 .59 Our .69 .63 .66 Table 5: Restrictiveness results (bottom three lines) on predicted trees. 
The top line (Candidate Extraction) measures the percent of correct modifiers identified in the predicted trees (shared across all of the classifiers). See Section 6.2. trained on the gold train set of the same corpus. For evaluation, we use the gold labels and compute (1) precision – the percent of predicted nonrestrictive modifiers which match a gold nonrestrictive modifier, and (2) recall – the percent of gold non-restrictive modifiers which match a predicted non-restrictive modifier. Note that this metric is strict, conflating both parser errors with our classifier’s errors. The results are shown in Table 5. The first line in the table measures the performance of the modifier extractor module on the predicted trees. A predicted modifier is considered correct if it agrees with a gold modifier on both its syntactic head as well as its span. The modifier extractor module is shared across all classifiers, as discussed in Section 5, and its performance on the predicted trees imposes an upper bound on all the classifiers. Both our and Dornescu’s classifiers drop 5-6 points in F1, keeping the differences observed on the gold trees, while Honnibal et al.’s simple comma-based classifier is less sensitive to parser errors, dropping only one point in F1. This small drop stems from our classifiers largely relying only on the modifier head and its span for feature computation, generally ignoring 1263 parsing errors within the modifier subtree. 7 Conclusions and Future Work We presented an end-to-end framework for restrictiveness annotation, including a novel QA-SRL based crowdsourcing methodology and a first consistent human-annotated corpus. Furthermore, we presented a linguistically motivated classifier, surpassing the previous baseline by 8% gain in F1. Future work can use our annotated corpus to develop classifiers that deal better with prepositional and adjectival modifiers, which require deeper semantic analysis. Acknowledgments We would like to thank Michael Elhadad and Yoav Goldberg for fruitful discussions, and the anonymous reviewers for their helpful comments. This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS), the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). References Gabor Angeli, Melvin Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of ACL, pages 86–90. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of CONLL, pages 152–164. Marie-Catherine de Marneffe and Christopher D Manning. 2008. The stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1–8. Iustin Dornescu, Richard Evans, and Constantin Orasan. 2014. Relative clause extraction for syntactic simplification. COLING 2014, page 1. Nigel Fabb. 1990. 
The difference between english restrictive and nonrestrictive relative clauses. Journal of linguistics, 26(01):57–77. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363–370. Association for Computational Linguistics. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd international conference on computational linguistics, pages 340–348. Association for Computational Linguistics. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–18. Association for Computational Linguistics. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In the Conference on Empirical Methods in Natural Language Processing (EMNLP). Julia Hockenmaier and Mark Steedman. 2007. Ccgbank: A corpus of ccg derivations and dependency structures extracted from the penn treebank. In Computational Linguistics. Matthew Honnibal, James R Curran, and Johan Bos. 2010. Rebanking ccgbank for improved np interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 207–215. Association for Computational Linguistics. Rodney Huddleston, Geoffrey K Pullum, et al. 2002. The cambridge grammar of english. Language. Cambridge: Cambridge University Press. Richard Johansson and Pierre Nugues. 2008. Dependency-based syntactic-semantic analysis with propbank and nombank. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 183–187. Association for Computational Linguistics. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. 1264 Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The nombank project: An interim report. In HLT-NAACL 2004 workshop: Frontiers in corpus annotation, pages 24–31. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. Maltparser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(02):95–135. Naoaki Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs). Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71–106. Gabriel Stanovsky, Ido Dagan, and Mausam. 2015. Open IE as an intermediate structure for semantic tasks. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). Carla Umbach. 2006. Non-restrictive modification and backgrounding. In Proceedings of the Ninth Symposium on Logic and Language, pages 152–159. 1265
2016
119
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 118–129, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Language Transfer Learning for Supervised Lexical Substitution Gerold Hintz and Chris Biemann Research Training Group AIPHES / FG Language Technology Computer Science Department, Technische Universität Darmstadt {hintz,biem}@lt.informatik.tu-darmstadt.de Abstract We propose a framework for lexical substitution that is able to perform transfer learning across languages. Datasets for this task are available in at least three languages (English, Italian, and German). Previous work has addressed each of these tasks in isolation. In contrast, we regard the union of three shared tasks as a combined multilingual dataset. We show that a supervised system can be trained effectively, even if training and evaluation data are from different languages. Successful transfer learning between languages suggests that the learned model is in fact independent of the underlying language. We combine state-of-the-art unsupervised features obtained from syntactic word embeddings and distributional thesauri in a supervised delexicalized ranking system. Our system improves over state of the art in the full lexical substitution task in all three languages. 1 Introduction The lexical substitution task is defined as replacing a target word in a sentence context with a synonym, which does not alter the meaning of the utterance. Although this appears easy to humans, automatically performing such a substitution is challenging, as it implicitly addresses the problem of both determining semantically similar substitutes, as well as resolving the ambiguity of polysemous words. In fact, lexical substitution was originally conceived as an extrinsic evaluation of Word Sense Disambiguation (WSD) when first proposed by McCarthy & Navigli (2007). However, a system capable of replacing words by appropriate meaning-preserving substitutes can be utilized in downstream tasks that require paraphrasing of input text. Examples of such use cases include text simplification, text shortening, and summarization. Furthermore, lexical substitution can be regarded as an alternative to WSD in downstream tasks requiring word disambiguation. For example, it was successfully applied in Semantic Textual Similarity (Bär et al., 2012). A given list of substitution words can be regarded as a vector representation modeling the meaning of a word in context. As opposed to WSD systems, this is not reliant on a predefined sense inventory, and therefore does not have to deal with issues of coverage, or sense granularity. On the other hand, performing lexical substitution is more complex than WSD, as a system has to both generate and rank a list of substitution candidates per instance. Over the last decade, a number of shared tasks in lexical substitution has been organized and a wide range of methods have been proposed. Although many approaches are in fact languageindependent, most existing work is tailored to a single language and dataset. In this work, we investigate lexical substitution as a multilingual task, and report experimental results for English, German and Italian datasets. We consider a supervised approach to lexical substitution, which casts the task as a ranking problem (Szarvas et al., 2013b). 
We adapt state-of-the-art unsupervised features (Biemann and Riedl, 2013; Melamud et al., 2015a) in a delexicalized ranking framework and perform transfer learning experiments by training a ranker model from a different language. Finally, we demonstrate the utility of aggregating data from different languages and train our model on this single multilingual dataset. We are able to improve the state of the art for the full task on all datasets. The remainder of this paper is structured as follows. In Section 2 we elaborate on the lexical sub118 stitution task and datasets. Section 3 shows related work of systems addressing each of these tasks. In Section 4 we describe our method for building a supervised system capable of transfer learning. Section 5 shows our experimental results and discussion. Finally in Section 6 we give a conclusion and outlook to future work. 2 Lexical substitution datasets and evaluation The lexical substitution task was first defined at SemEval 2007 (McCarthy and Navigli, 2007, "SE07"). A lexical sample of target word is selected from different word classes (nouns, verbs, and adjectives). Through annotation, a set of valid substitutes was collected for 10-20 contexts per target. Whereas in the original SE07 task, annotators were free to provide “up to three, but all equally good” substitutes, later tasks dropped this restriction. Substitutes were subsequently aggregated by annotator frequency, creating a ranking of substitutes. The use of SE07 has become a de-facto standard for system comparison, however equivalent datasets have been produced for other languages. Evalita 2009 posed a lexical substitution task for Italian (Toral, 2009, "EL09"). Participants were free to obtain a list of substitution candidates in any way, most commonly Italian WordNet1 was used. A WeightedSense baseline provided by the organizers proved very strong, as all systems scored below it. This baseline is obtained by aggregating differently weighted semantic relations from multiple human-created lexical resources (Ruimy et al., 2002). A German version of the lexical substitution task was organized at GermEval 2015 (Cholakov et al., 2014; Miller et al., 2015, "GE15"). Likewise, WeightedSense was able to beat both of two participating systems in oot evaluations (Miller et al., 2015). A variation for cross-lingual lexical substitution was proposed by Mihalcea et al. (2010), in which substitute words are required in a different language than the source sentence. The sentence context as well as the target word were given in English, whereas the substitute words should be provided in Spanish (annotators were fluent in both languages). This variant is motivated by direct application in Machine Translation systems, or as an aid for human-based translation. There also ex1Italian WordNet has later been migrated into MultiWordNet (MWN), which is used in this work. ists a larger crowd-sourced dataset of 1012 nouns (Biemann, 2013, "TWSI"), as well as an all-words dataset in which all words in each sentence are annotated with lexical expansions (Kremer et al., 2014). Evaluation of lexical substitution adheres to metrics defined by SE07 (McCarthy and Navigli, 2007), who provide two evaluation settings2; best evaluating only a system’s “best guess” of a single target substitute and oot, an unordered evaluation of up to ten substitutes. Thater et. al (2009) proposed to use Generalized Average Precision (GAP), to compare an output ranking rather than unordered sets of substitutes. 
Dataset comparison The proposed lexical substitution datasets (SE07, EL09, GE15) differ in their degree of ambiguity of target items. If a dataset contains mostly target words that are unambiguous, substitution lists of different instances of the same target are similar, despite occurring in different context. We can quantify this degree of variation by measuring the overlap of gold substitutes of each target across all contexts. For this, we adapt the pairwise agreement (PA) metric defined by McCarthy & Navigli (2009). Instead of inter-annotator agreement we measure agreement across different context instances. Let T be a set of lexical target words, and D dataset of instances (ti,Si) ∈D, in which target ti ∈T is annotated with a set of substitutes Si. Then we regard for each target word t the substitute sets St ⊂D for t. We define a substitute agreement as SA(t) as the mean pairwise dice coefficient between all s1,s2 ∈St where s1 ̸= s2. For each dataset D we list the substitute variance SV = 1 −1 |T| ∑t∈T SA(t). Table 1 shows this metric for the three datasets, as well as for subsets of the dataset according to target part of speech. It can be seen that the variance in gold substitutes differs substantially between datasets, but not much between target word type within a dataset. EL09 has the highest degree of variance, suggesting that targets tend to be more ambiguous, whereas GE15 has the lowest degree of variance, suggesting less ambiguity. 3 Related Work Lexical substitution has been addressed extensively in recent years. Early systems, having only very few training instances available, use un2The original SE07 task had a third evaluation setting MWE, in which systems had to correctly identify which target words were part of a multiword expression. 119 dataset substitute variance (SV) noun verb adj adv all SemEval-2007 0.78 0.79 0.72 0.66 0.75 Evalita-2009 0.84 0.82 0.83 0.82 0.83 GermEval-2015 0.59 0.67 0.60 0.66 all 0.75 0.72 0.73 0.69 0.73 Table 1: Degree of variation in gold answers supervised approaches for determining appropriate substitutes. For the English SE07 task, systems mostly consider substitution candidates from WordNet (Fellbaum, 1998) and cast lexical substitution into a ranking task. Experiments may also be performed by pooling the set of candidates from the gold data, evaluating a pure ranking variant. Early approaches use a contextualized word instance representation and rank candidates according to their similarity to this representation. Effective representations are syntactic vector space models (Erk and Padó, 2008; Thater et al., 2011), which use distributional sparse vector representations based on the syntactic context of words. Performance improvement could be shown for different models, including the use of graph centrality algorithms on directional word similarity graphs (Sinha and Mihalcea, 2011), and clustering approaches on word instance representations (Erk and Padó, 2010). Multiple systems have built upon the distributional approach. Extensions include the use of LDA topic models (Ó Séaghdha and Korhonen, 2014), and probabilistic graphical models (Moon and Erk, 2013). The current state of the art combines a distributional model with the use of n-gram language models (Melamud et al., 2015a). They define the context vector of each word in a background corpus as a substitute vector, which is a vector of suitable filler words for the current n-gram context. 
They then obtain a contextualized paraphrase vector by computing a weighted average of substitute vectors in the background corpus, based on their similarity to the current target instance. In contrast to traditional sparse vector representations obtained through distributional methods, a recent trend is the use of low-dimensional dense vector representations. The use of such vector representations or word embeddings has been popularized by the continuous bag-of-words (CBOW) and Skip-gram model (Mikolov et al., 2013a). Melamud et al. (2015b) show a simple and knowledgelean model for lexical substitution based solely on syntactic word embeddings. As we leverage this model as a feature in our approach, we will elaborate on this in Section 4. Another approach for applying word embeddings to lexical substitution is their direct extension with multiple word senses, which can be weighted according to target context (Neelakantan et al., 2014). Biemann (2013) first showed that the lexical substitution task can be solved very well when sufficient amount of training data is collected per target. An approach based on crowdsourcing human judgments achieved the best performance on the S07 dataset to day. However, judgments had to be collected for each lexical item, and as a consequence the approach can not scale to an open vocabulary. As an alternative to per-word supervised systems trained on target instances per lexeme, allwords systems aim to generalize over all lexical items. Szarvas et al. (2013a) proposed such a system by using delexicalization: features are generalized in such a way that they are independent of lexical items, and thus generalize beyond the training set and across targets. Originally, a maximum entropy classifier was trained on target-substitute instances and used for pointwise ranking of substitution candidates. In a follow-up work it was shown that learning-to-rank methods could drastically improve this approach, achieving state-ofthe-art performance with a LambdaMART ranker (Szarvas et al., 2013b). In this work we will build upon this model and further generalize not only across lexical items but across different languages. For both EL09 and GE15, existing approaches have been adapted. For the Italian dataset, a distributional method was combined with LSA (De Cao and Basili, 2009). The best performing system applied a WSD system and language models (Basile and Semeraro, 2009). For the German dataset, Hintz and Biemann (2015) adapted the supervised approach by (Szarvas et al., 2013a), achieving best performance for nouns and adjectives. Jackov (2015) used a deep semantic analysis framework employing an internal dependency relation knowledge base, which achieved the best performance for verbs. 4 Method description We subdivide lexical substitution into two subtasks; candidate selection and ranking. For a given target t, we consider a list of possible substi120 tutes s ∈Ct, where Ct is a static per-target candidate list. Our method is agnostic to the creation of this static resource, which can be obtained either by an unsupervised similarity-based approach, or from a lexical resource. In particular, candidates obtained at this stage do not disambiguate possible multiple senses of t, and are filtered and ordered in the ranking stage by a supervised model. In modeling a supervised system, we have experimented with two learning setups. The first is applying a standard classification / regression learner. 
Here, lexical substitution is cast into a pointwise ranking task by training on targetsubstitute pairs generated from the gold standard. For each sentence context c, target word t and substitute s, we regard the tuple (c,t,s) as a training instance. We obtain these training instances for each lexsub instance (c,t) by considering all substitutes s ∈Gt ∪Ct where Gt are all candidates for target t pooled from the gold data and Ct are obtained from lexical resources. We then experiment with two labeling alternatives for a binary classification and a regression setup, respectively. For binary classification we label each instance (c,t,s) as positive if s has been suggested as a substitute for t by at least one annotator, and as negative otherwise. For regression, we normalize the annotation counts for each substitute to obtain a score in (0,1] if a substitute s occurs in the gold data, 0 otherwise. The ranking of substitutes per target is obtained by considering the posterior likelihood of the positive label as yielded by a classifier model. We have tried multiple classifiers but have found no significant improvement over a maximum entropy baseline3. Our second setup is a learningto-rank framework, adapted from (Szarvas et al., 2013b). Here, we are not restricted to a pointwise ranking model, but consider pairwise and listwise models4. We base our feature model on existing research. In addition to basic syntactic and frequency-based features, we obtained sophisticated features from trigram and syntactic thesauri, motivated by the findings of Biemann and Riedl (2013), as well as syntactic embedding features motivated by Melamud et al. (2015b). 3For classification setup we use Mallet: http:// mallet.cs.umass.edu/ 4For learning-to-rank we use RankLib: http:// mallet.cs.umass.edu/ dataset maximum recall w/ MWE w/o MWE SemEval-2007 0.459 0.404 Evalita-2009 0.369 0.337 GermEval-2015 0.192 0.178 all 0.242 0.223 Table 2: Upper bound for substitute recall based on lexical resources WordNet, MultiWordNet, GermaNet 4.1 Candidate selection We confirm earlier research (Sinha and Mihalcea, 2009) on the high quality of selecting candidates from lexical resources. We thus base our candidate selection on prevalently used resources: WordNet (Fellbaum, 1998) for English, GermaNet (Hamp and Feldweg, 1997) for German and MultiWordNet (Pianta et al., 2002) for Italian. For all resources, we consider all possible senses for a given target word and obtain all synonyms, hypernyms and hyponyms and their transitive hull. Thus, for the hypernymy and hyponymy relation, we follow the respective edges in the graph collecting all nodes (synsets) along the path. For each synset, we extract all lemmas as substitution candidates. Although restricting candidates incurs a relatively low upper bound on system recall, we still obtain best results using this rather conservative filter. Table 2 shows the upper bound for system recall for each of the datasets, evaluated with and without removing all multiword expressions from both candidate lists and gold data. A higher coverage of WordNet is a plausible explanation for the much higher recall on the English data. 4.2 Supervised ranking Learning-to-rank methods train a supervised model for ranking a list of items by relevance. A basic pointwise approach applies regression techniques to obtain a relevance scores for each item in isolation. 
More advanced models are based on pairwise preference information for instance pairs, and listwise approaches, which are optimized on a global metric of a given ranking output. An extensive overview of learning-to-rank models can be found in (Burges, 2010). For lexical substitution, LambdaMART (Wu et al., 2010) has been found to be particularly effective. LambdaMART is a listwise method based on gradient boosting of regression trees. Its two main hyperparameters are 121 the number of leaves in each regression tree and the number of iterations and trees. We have not performed extensive tuning of these hyperparameters and used default settings, an ensemble of 1000 trees with 10 leaves. 4.3 Delexicalized features The idea of delexicalization has been proposed, for instance, by Bergsma et al. (2007). They propose to use statistical measures based solely on the frequency of different expansions of the same target term. Their feature set has motivated a large subset of the feature model, which we adapt in this work. The idea of generalizing features for lexical substitution in such a way that they work across lexical items has been shown by Moon and Erk (2013), and made explicit by Szarvas et al. (2013a). Instances are characterized using non-lexical features from heterogeneous evidence. The intuition of this feature model is to exploit redundant signals of substitutability from different sources and methods. In cases where background corpora are required, the following data is used throughout all features: For English, a newspaper corpus compiled from 105 million sentences from the Leipzig Corpora Collection (Richter et al., 2006) and the Gigaword corpus (Parker et al., 2011) was used. For German a 70M sentence newswire corpus (Biemann et al., 2007) was used. For Italian, a subset of 40M sentences of itWac, a large webcrawl, was used (Baroni et al., 2009). Shallow syntactic features We apply a partof-speech tagger trained on universal POS tags (Petrov et al., 2012), which we simplify into the classes noun, verb, adjective and adverb. Using these simplified tags we construct an n-gram sliding window, with n ∈[1..5], of POS around the target. We could also reduce window sizes drastically to n = 1,2 without sacrificing performance. Frequency features We use language models for each of the languages to obtain frequency ratio features. An n-gram sliding window around a target t is used to generate a set of features freq(cl,s,cr) freq(cl,t,cr), where cl and cr are the left and right context words around t. Here, we normalize the frequency of the substitute with the frequency of the n-gram with original target t. As a variant, we further normalize frequencies by the set of all substitutes, to obtain frequencies features freq(cl,s,cr) ∑s′∈Ct freq(cl,s′,cr) where Ct is the set of candidate substitutes for t. In our experiments we used sliding windows of size [1..5]. We obtain 5-gram counts from web1t (Brants and Franz, 2009). Conjunction ratio features Based on the ngram resources above, we further define a conjunctive phrase ratio feature, which measures how often the construct (cl,t,conjunction,s,cr) occurs in a background corpus; i.e. how often t and s cooccur with a conjunction word (“and”, “or”, “,”), within the context of the sentence. 
As there is a different set of conjunction words for each language, we first aggregate the mean over all conjunction words: conjl,r (t,s) = 1 |CONJ| ∑ con∈CONJ freq(cl,t,con,s,cr) where l and r is the size of the left and right context window, and CONJ is a set of conjunction words per-language5. For left and right context size l = r = 0 this feature also captures a contextindependent conjunction co-occurrence between only t and s. Again, we normalize this feature over the set of all candidates: conjl,r (t,s) ∑s′∈Ct conjl,r (t,s) Distributional features We construct a distributional thesaurus (DT) for each of the languages by following Biemann and Riedl (2013) and obtain first-order word-to-context measures, as well as second-order word-to-word similarity measures. As context features we have experimented with both syntactic dependencies as well as left and right neighboring words, and have found them to perform equivalently. As a salience measure we use Lexicographer’s Mutual Information (Bordag, 2008) and prune the data, keeping only the 1000 most salient features per word. Word similarity is obtained from an overlap count in the pruned context features. We model features for the contextualized distributional similarity between t and s as • percentage of shared context features for the top-k context features of t and s, globally and restricted to sentence context (k =5, 20, 50, 100, 200) 5Conjunction words used are and, or, (comma), for English; und, oder, (comma) for German and e, ed, o, od, (comma) for Italian. 122 • percentage of shared words for the top-k similar words of t and s (k =200) • sum of salience score of context features of s overlapping with the sentence context • binary occurrence of s in top-k similar words of t (k =100, 200) With the exception of the last feature, these measures are scaled to [0,1] over the set of all substitute candidates. Syntactic word embeddings We adapt the unsupervised approach by (Melamud et al., 2015a) as a set of features. We follow (Levy and Goldberg, 2014) to construct dependency-based word embeddings; we obtain syntactic contexts by running a syntactic dependency parser6, and computing word embeddings using dependency edges as context features7. The resulting dense vector representations for words and context live within the same vector space. We compute the semantic similarity between a target and a substitute word from the cosine similarity in the word embedding space, as well as the first-order target-to-context similarity. For a given target word t and substitute s, let Ct be the syntactic context of t and c ∈Ct a single context – i.e. a dependency edge attached to t; let vt, vs be the vector representations of t and s in the word embedding space, and vc the vector representation of c in the context embedding space. Then Sim1 = cos(vs,vc) and Sim2 = cos(vs,vt) are the first-order and second-order substitutability measures considered by Melamud et al. (2015a). In contrast to their approach, we do not just consider an unsupervised combination of these two measures, but instead use both Sim1 and Sim2 as single features. We also use their combinations of a balanced / unbalanced, arithmetic / geometrical mean, to obtain six numeric features in total. Importantly, these features are independent of the underlying embedding vectors and can therefore generalize across arbitrary embeddings between languages. Semantic resource features To generalize across multiple languages we minimize the 6We trained models for Mate (https://code. 
google.com/p/mate-tools/) based on universal dependencies (http://universaldependencies. org/) 7We used word2vecf (https://bitbucket.org/ yoavgo/word2vecf) for computing syntactic word embeddings complexity of features obtained from semantic resources – which may differ notably in size and structure. From the resources listed in Section 4.1 we extract binary features for the semantic relations synonymy, hypernymy and hyponymy, occurring between t and s. We have also experimented with graded variants for transitive relations, such as encoding n-th level hypernymy, but have not observed any gain from this feature variation. 4.4 Transfer learning Transfer learning is made feasible by a fully lexeme-independent and language-independent feature space. Language-specific knowledge resides only within the respective resources for each language, and gets abstracted in feature extraction. Figure 1 illustrates this process at the example of two entirely unrelated sentences in different languages (English and German). A further mediator for transfer learning is a model based on boosted decision trees. As opposed to linear models, which could not be reasonably learned across languages, a LambdaMART ranker is able to learn feature interaction across languages. To give an example of what the resulting model can pick up on, we can regard conditionally strong features. Consider the n-gram pair frequency ratio feature of window size (l,r) = (1,0), which compares the frequency ratio of the target and substitute including a single left context word. Depending on the POS window, this feature can be highly informative in some cases, where it is less informative in others. For adjective-noun pairs, in which the noun is the substitution target, the model can learn that this frequency ratio is strongly positively correlated; in this case, the substitute frequently occurs with the same adjective than the original target. For other POS windows, for example determiner-noun pairs, the same frequency ratio may be less indicative, as most nouns frequently occur with a determiner. This property works across languages, as long as as attributive adjectives are prepositioned. In our subset of languages, this is the case for English and German, but not for Italian, which uses postpositive adjectives. Nevertheless, we are able to learn such universal feature interactions. 5 Results and discussion Evaluation of lexical substitution requires special care, as different evaluation settings are used 123 Figure 1: Visualization of feature extraction and delexicalization. Two unrelated sentences in English and German (translation: “the strain has to be limited”) are shown. Language-specific knowledge is obtained from resources for each language respectively. The resulting feature space is delexicalized and language independent. throughout previous work and comparability is not always guaranteed. We follow the convention of reporting the full lexical substitution task (both generating and ranking candidates) with the metrics P-best and P-oot and report the ranking-only task (candidates pooled from the gold standard) with the GAP score. We further observe that previous work commonly discards multiword expressions from both the candidate lists as well as the gold data8. We follow this convention, but note that our system is in fact capable of successfully ranking multiword expansions out of the box. 
System performance slightly decreases when including MWE, as there is virtually no overlap between those provided by the system and those in the gold standard. For ranking we experiment with different pointwise classifiers as provided by Mallet (MaxEnt classification and regression) as well learning-torank models provided by RankLib (RankBoost, RankNet, LambdaMART). In line with findings in (Szarvas et al., 2013b), we observe that learningto-rank approaches work better than a pointwise classification / regression setup throughout all languages and feature subsets. Among different rankers, we confirm LambdaMART to yield the best performance, and will only report numbers using this model. As optimization metric we have explored both NDCG@10 and MAP. The NDCG metric can incorporate different scoring weights 8The omission of MWE by multiple authors has been confirmed by the authors of (Melamud et al., 2015a). Open evaluation (best-P / oot-P) Training English German Italian English 16.63 48.16 7.43 26.79 8.57 31.94 German 13.20 44.61 11.97 38.45 7.05 28.75 Italian 13.91 39.72 4.25 22.66 15.19 40.37 others 17.19 46.79 8.15 27.33 10.04 30.82 all 17.23 48.83 12.94 41.32 16.15 41.29 SOA9 15.94 36.37 11.20 20.14 10.86 41.46 Table 3: Transfer learning results for the open candidate task (candidates from lexical resources) Ranking evaluation (GAP) Training English German Italian English 51.0 26.9 44.5 German 44.3 56.2 42.9 Italian 36.7 22.2 48.0 others 43.7 26.7 43.9 all 51.9 51.3 50.0 Table 4: Transfer learning results on the rankingonly task (candidates pooled from gold) based on annotator overlap, however MAP directly correlates with evaluation score. We have found optimizing on MAP to yield slightly better results, even if this disregards the relative score weights between gold substitutes. For the rankingonly task, we also extended the pooled training data with additional negative examples (i.e. adding all candidates as for the full task) but observed a minor decrease in system performance. 124 We report transfer learning results across all three datasets. Table 3 shows a transfer-learning matrix for the full lexical substitution task, whereas Table 4 shows results for the ranking-only task. For evaluation, we consistently use the complete datasets, which are roughly equal in size for all languages (~ 2000 instances). For the identity entries in this matrix, as well as training on the complete dataset (“all”) we follow previous supervised work and perform 10-fold cross-validation. Splits are based on the target lexeme, so that no two instances for the same target word are in different sets. Tables 3 and 4 suggest the feasibility of transfer learning. Although models trained on the original language (identity entries of the matrix) perform best, training on a different language still yields reasonable results. Training only on a single other language, not surprisingly, yields worse results for each dataset, however combining the data from the two remaining languages (“others”) can mitigate this issue to some degree. Importantly, adding the data from two additional languages consistently improves system performance throughout all datasets for the open candidate task (Table 3). It is interesting to note that in case of SE07, training on only other languages performs surprisingly well for the best-P score, beating even a model trained on English. A possible explanation for this is that the SE07 dataset appears to be somewhere in the middle between EL09 and GE15 in terms of substitute variance. 
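The lexeme-based cross-validation splits described above can be produced with a few lines; the round-robin assignment of lexeme groups to folds and the data layout are assumptions, since any grouping that keeps all instances of a target word in a single fold satisfies the constraint.

```python
import random
from collections import defaultdict
from typing import Any, Dict, Iterable, List, Tuple

def lexeme_folds(instances: Iterable[Tuple[str, Any]],
                 n_folds: int = 10, seed: int = 0) -> List[List[Any]]:
    """Group instances by their target lexeme and distribute whole groups over
    n_folds folds, so no target word is split between training and test data."""
    by_lexeme: Dict[str, List[Any]] = defaultdict(list)
    for lexeme, inst in instances:
        by_lexeme[lexeme].append(inst)
    lexemes = sorted(by_lexeme)
    random.Random(seed).shuffle(lexemes)
    folds: List[List[Any]] = [[] for _ in range(n_folds)]
    for i, lexeme in enumerate(lexemes):
        folds[i % n_folds].extend(by_lexeme[lexeme])   # round-robin over lexeme groups
    return folds
```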
For the rankingonly task, transfer learning seems to work a little less effectively. In case of German, adding foreign language data in fact hurts GAP performance. This potentially originates from a much smaller set of training instances and inconsistency of the amount and overlap of pooled candidates across different tasks (as described in Table 1). We also observe that a learning-to-rank model is essential for performing transfer learning. In case of LambdaMART, an ensemble of decision trees is constructed, which is well suited to exploit redundant signals across multiple features. Linear models resulted in worse performance for transfer learning, as the resulting weights seem to be language-specific. Feature ablation experiments are performed for various feature groups in the full and ranking-only task (Table 5). The ablation groups correspond to 9State of the art baseline, according to previous reported results, c.f. Table 6 the feature categories defined in Section 4.3. The frequency group includes plain frequency features as well as conjunction ratio features. We consider only our universal model trained on all language data (with 10-fold CV for each dataset). In case of English, the full system performs best and all feature groups improve overall performance. For other languages these results are mixed. In case of the German data, embedding features and semantic relation features seem to work well on their own, so that results for other ablation groups are slightly better. For ranking-only, embedding features seem to be largely subsumed by the combination of the other groups. Ablation of embeddings differs vastly between the full and rankingonly task; they seem to more more crucial for the full task. For all languages, semantic relations are the best feature in the full task, acting as a strong filter for candidates; in ranking-only they are more dispensable. In summary, we observe that delexicalized transfer learning for lexical substitution is possible. Existing supervised approaches can be extended to generalize across multiple languages without much effort. Training a supervised system on different language data emphasizes that the learned model is sufficiently generic to be language independent. Our feature space constructed from heterogeneous evidence consists of many features that perform relatively weakly on their own. The resulting ranking model captures redundancy between these signals. Finally, Table 6 shows our results in comparison to previous work. Note that we omit some participating systems from the original SE07 task. The reason we did not list IRST2 (Giuliano et al., 2007) is that for out-of-ten results, the system outputs the same substitute multiple times and the evaluation scheme gives credit for each copy of the substitute. Our (and other) systems do not tamper with the metric in this way, and only yield a set of substitutes. UNT (Hassan et al., 2007) uses a much richer set of knowledge bases, not all of them easily available, to achieve slightly better oot scores. From our experiments, we list both a model trained per language, as well as a universal model trained on all data. The latter beats nearly all other approaches on the full lexical substitution task, despite not being optimized for a single language. 
Although omission of MWEs is common practice for SE07, it is unclear if this was 125 English German Italian best-P GAP best-P GAP best-P GAP w/o syntax 15.35 49.5 12.33 42.1 15.70 50.3 w/o frequency 17.04 48.6 13.30 54.6 15.78 51.5 w/o DT 16.88 48.8 12.18 54.6 17.65 51.8 w/o sem. relation 11.51 49.9 6.82 33.9 8.06 49.7 w/o embedding 10.05 51.5 11.51 47.1 7.17 54.4 full system 17.23 51.9 12.94 51.3 16.15 50.0 Table 5: Feature ablation results for the full and ranking-only task (universal model trained on all data) done for EL09 and GE15. However, re-inclusion of MWE does not drastically alter results10. In the ranking-only variant, we are not able to beat the learning-to-rank approach by Szarvas et. al (2013b), we note however that they have performed extensive hyperparameter optimization of their ranker, which we have omitted. We are also not able to achieve GAP scores reported by Melamud at al. (2015b). Although we used their exact embeddings, we could not reproduce their results11. 6 Conclusion We are the first to model lexical substitution as a language-independent task by considering not just a single-language dataset, but by merging data from distinct tasks in English, German and Italian. We have shown that a supervised, delexicalized approach can successfully learn a single model across languages – and thus perform transfer learning for lexical substitution. We observe that a listwise ranker model such as LambdaMART facilitates this transfer learning. We have further shown that incorporating more data helps training a more robust model and can consistently improve system performance by adding foreign language training data. We extended an existing supervised learning-to-rank approach for lexical substitution (Szarvas et al., 2013b) with stateof-the-art embedding features (Melamud et al., 2015b). In our experiments, a single model trained on all data performed best on each language. In all 10For comparison, our scores including MWE for the “all data” model are as follows (best-P, oot-P, GAP). EL09: 15.12, 33.92, 45.8; GE15: 12.20, 41.15, 50.0 11Our evaluation of (Melamud et al., 2015b), balAdd yields a GAP score of 48.8, which is likely related to different evaluation settings. 12baseline by task organizer SemEval ’07 method best-P oot-P GAP (Erk and Padó, 2010) 38.6 (Thater et al., 2011) 51.7 (Szarvas et al., 2013a) 15.94 52.4 (Szarvas et al., 2013b) 55.0 (Melamud et al., 2015b) 08.09 27.65 52.9 (Melamud et al., 2015a) 12.72 36.37 55.2 our method (English only) 16.63 48.16 51.0 our method (all data) 17.23 48.83 51.9 Evalita ’09 method best-P oot-P GAP (Basile and Semeraro, 2009) 08.16 41.46 (Toral, 2009)12 10.86 27.52 our method (Italian only) 15.19 40.37 48.0 our method (all data) 16.15 31.18 50.0 GermEval ’15 method best-P oot-P GAP (Hintz and Biemann, 2015) 11.20 19.49 (Jackov, 2015) 06.73 20.14 our method (German only) 11.97 38.45 56.2 our method (all data) 12.94 41.32 51.3 Table 6: Experimental results of our method compared to related work for all three lexical substitution tasks three datasets we were able to improve the current state of the art for the full lexical substitution task. The resulting model can be regarded as languageindependent; given an unannotated background corpus for computing language-specific resources and a source of substitution candidates, the system can be used almost out of the box. For obtaining substitution candidates, we still rely on lexical resources such as WordNet, which have to be available for each language. 
As future work we aim to make our approach completely knowledgefree by eliminating this dependency. We can consider substitution candidates based on their distributional similarity. First experiments confirm that this already yields a much better coverage, i.e. upper bound on recall, while introducing more noise. The remaining key challenge is to better characterize possible substitutes from bad substitutes in ranked lists of distributionally similar words, which frequently contain antonyms and cohyponyms. We will explore unsupervised acquisition of relational similarity (Mikolov et al., 2013b) for this task. 126 Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1 and the SEMSCH project, grant No. BI 1544. References Daniel Bär, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 435–440, Montreal, Canada. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209–226. Pierpaolo Basile and Giovanni Semeraro. 2009. UNIBA @ EVALITA 2009 lexical substitution task. In Proceedings of EVALITA workshop, 11th Congress of Italian Association for Artificial Intelligence, Reggio Emilia, Italy. Shane Bergsma and Qin Iris Wang. 2007. Learning noun phrase query segmentation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), volume 7, pages 819–826, Prague, Czech Republic. Chris Biemann and Martin Riedl. 2013. Text: Now in 2D! A framework for lexical expansion with contextual similarity. Journal of Language Modelling, 1(1):55–95. Chris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The Leipzig corpora collection: monolingual corpora of standard size. In Proceedings of Corpus Linguistics, Birmingham, UK. Chris Biemann. 2013. Creating a system for lexical substitutions from scratch using crowdsourcing. Language Resources and Evaluation, 47(1):97–122. Stefan Bordag. 2008. A Comparison of Co-occurrence and Similarity Measures As Simulations of Context. In Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2008, pages 52–63, Haifa, Israel. Thorsten Brants and Alex Franz. 2009. Web 1T 5gram, 10 European languages version 1. Linguistic Data Consortium. Christopher J.C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An overview. Technical Report MSR-TR-2010-82, Microsoft Research. Kostadin Cholakov, Chris Biemann, Judith EckleKohler, and Iryna Gurevych. 2014. Lexical substitution dataset for German. In Proceedings of the 9th International Conference on Language Resources and Evaluation, pages 1406–1411, Reykjavik, Iceland. Diego De Cao and Roberto Basili. 2009. Combining distributional and paradigmatic information in a lexical substitution task. 
In Proceedings of EVALITA workshop, 11th Congress of Italian Association for Artificial Intelligence, Reggio Emilia, Italy. Katrin Erk and Sebastian Padó. 2008. A structured vector space model for word meaning in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 897– 906, Waikiki, HI, USA. Katrin Erk and Sebastian Padó. 2010. Exemplar-based models for word meaning in context. In Proceedings of the ACL 2010 conference short papers, pages 92– 97, Uppsala, Sweden. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Claudio Giuliano, Alfio Gliozzo, and Carlo Strapparava. 2007. FBK-irst: Lexical substitution task exploiting domain and syntagmatic coherence. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 145– 148, Prague, Czech Republic. Birgit Hamp and Helmut Feldweg. 1997. GermaNet a lexical-semantic net for German. In In Proceedings of ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9–15, Madrid, Spain. Samer Hassan, Andras Csomai, Carmen Banea, Ravi Sinha, and Rada Mihalcea. 2007. UNT: Subfinder: Combining knowledge sources for automatic lexical substitution. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 410–413, Prague, Czech Republic. Gerold Hintz and Chris Biemann. 2015. Delexicalized supervised German lexical substitution. In Proceedings of GermEval 2015: LexSub, pages 11–16, Essen, Germany. Luchezar Jackov. 2015. Lexical substitution using deep syntactic and semantic analysis. In Proceedings of GermEval 2015: LexSub, pages 17–20, Essen, Germany. 127 Gerhard Kremer, Katrin Erk, Sebastian Padó, and Stefan Thater. 2014. What substitutes tell us - analysis of an "all-words" lexical substitution corpus. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 540–549, Gothenburg, Sweden. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 302–308, Baltimore, MD, USA. Diana McCarthy and Roberto Navigli. 2007. Semeval2007 task 10: English lexical substitution task. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 48–53, Prague, Czech Republic. Diana McCarthy and Roberto Navigli. 2009. The English lexical substitution task. Language resources and evaluation, 43(2):139–159. Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015a. Modeling word meaning in context with substitute vectors. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 472–482, Denver, CO, USA. Oren Melamud, Omer Levy, and Ido Dagan. 2015b. A simple word embedding model for lexical substitution. VSM Workshop. Denver, CO, USA. Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010. Semeval-2010 task 2: Cross-lingual lexical substitution. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 9–14, Uppsala, Sweden. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, Lake Tahoe, NV, USA. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, GA, USA. Tristan Miller, Darina Benikova, and Sallam Abualhaija. 2015. GermEval 2015: LexSub – A shared task for German-language lexical substitution. In Proceedings of GermEval 2015: LexSub, pages 1– 9, Essen, Germany. Taesun Moon and Katrin Erk. 2013. An inferencebased model of word meaning in context as a paraphrase distribution. ACM Transactions on Intelligent Systems and Technology (TIST), 4(3):42. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1059–1069, Doha, Qatar. Diarmuid Ó Séaghdha and Anna Korhonen. 2014. Probabilistic distributional semantics with latent variable models. Computational Linguistics, 40(3):587–631. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edition. Technical report, Linguistic Data Consortium, Philadelphia, PA, USA. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eight International Conference on Language Resources and Evaluation, pages 2089–2096, Istanbul, Turkey. Emanuele Pianta, Luisa Bentivogli, and Christian Girardi. 2002. MultiWordNet: developing an aligned multilingual database. In Proceedings of the 1st International WordNet Conference, pages 293–302, Mysore, India. Matthias Richter, Uwe Quasthoff, Erla Hallsteinsdóttir, and Chris Biemann. 2006. Exploiting the Leipzig Corpora Collection. In Proceedings of the Information Society Language Technologies Conference 2006, pages 68–73. Ljubljana, Slovenia. Nilda Ruimy, Monica Monachini, Raffaella Distante, Elisabetta Guazzini, Stefano Molino, Marisa Ulivieri, Nicoletta Calzolari, and Antonio Zampolli. 2002. CLIPS, a multi-level Italian computational lexicon: a glimpse to data. In Proceedings of the 3rd International Conference on Language Resources and Evaluation, pages 792–799, Las Palmas, Spain. Ravi Sinha and Rada Mihalcea. 2009. Combining lexical resources for contextual synonym expansion. In Proceedings of the Conference in Recent Advances in Natural Language Processing, pages 404–410, Borovets, Bulgaria. Ravi Som Sinha and Rada Flavia Mihalcea. 2011. Using centrality algorithms on directed graphs for synonym expansion. In FLAIRS Conference, pages 311–316, Palm Beach, FL, USA. György Szarvas, Chris Biemann, Iryna Gurevych, et al. 2013a. Supervised all-words lexical substitution using delexicalized features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1131–1141, Atlanta, GA, USA. György Szarvas, Róbert Busa-Fekete, and Eyke Hüllermeier. 2013b. Learning to rank lexical substitutions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1926–1932, Seattle, WA, USA. 128 Stefan Thater, Georgiana Dinu, and Manfred Pinkal. 2009. Ranking paraphrases in context. In Proceedings of the 2009 Workshop on Applied Textual Inference, pages 44–47, Singapore, Republic of Singapore. Stefan Thater, Hagen Fürstenau, and Manfred Pinkal. 2011. Word meaning in context: A simple and effective vector model. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1134–1143, Edinburgh, UK. Antonio Toral. 2009. The lexical substitution task at EVALITA 2009. In Proceedings of EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence, Reggio Emilia, Italy. Qiang Wu, Christopher JC Burges, Krysta M Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Information Retrieval, 13(3):254–270. 129
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1266–1276, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics

Bilingual Segmented Topic Model
Akihiro Tamura and Eiichiro Sumita
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, JAPAN
{akihiro.tamura, eiichiro.sumita}@nict.go.jp

Abstract

This study proposes the bilingual segmented topic model (BiSTM), which hierarchically models documents by treating each document as a set of segments, e.g., sections. While previous bilingual topic models, such as bilingual latent Dirichlet allocation (BiLDA) (Mimno et al., 2009; Ni et al., 2009), consider only cross-lingual alignments between entire documents, the proposed model considers cross-lingual alignments between segments in addition to document-level alignments and assigns the same topic distribution to aligned segments. This study also presents a method for simultaneously inferring latent topics and segmentation boundaries, incorporating unsupervised topic segmentation (Du et al., 2013) into BiSTM. Experimental results show that the proposed model significantly outperforms BiLDA in terms of perplexity and demonstrates improved performance in translation pair extraction (up to +0.083 extraction accuracy).

1 Introduction

Probabilistic topic models, such as probabilistic latent semantic analysis (PLSA) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei et al., 2003), are generative models for documents that have been used as unsupervised frameworks to discover latent topics in document collections without prior knowledge. These topic models were originally applied to monolingual data; however, various recent studies have proposed the use of probabilistic topic models in multilingual settings1, where latent topics are shared across multiple languages. These models have improved several multilingual tasks, such as translation pair extraction and cross-lingual text classification (see the survey paper by Vulić et al. (2015) for details).

Figure 1: Wikipedia Article Example (a comparable English–Japanese article pair on football/soccer, each article divided into numbered sections such as "name", "game", and "history")

Most multilingual topic models, including bilingual LDA (BiLDA) (Mimno et al., 2009; Ni et al., 2009), model a document-aligned comparable corpus, such as a collection of Wikipedia articles, where aligned documents are topically similar but are not direct translations2. In particular, these models assume that the documents in each tuple share the same topic distribution and that each cross-lingual topic has a language-specific word distribution. Existing multilingual topic models consider only document-level alignments. However, most documents are hierarchically structured, i.e., a document comprises segments (e.g., sections and paragraphs) that can be aligned across languages. Figure 1 shows a Wikipedia article example, which contains a set of sections. Sections 1, 2, and 3 in the English article correspond topically to sections 4, 2, and 3 in the Japanese counterpart, respectively.

1In this work, we deal with a bilingual setting, but our approach can be extended straightforwardly to apply to more than two languages.
2In this study, we focus on models for a document-aligned comparable corpus. We describe other types of multilingual topic models and their limitations in Section 7.
To date, such segment-level alignments have been ignored; however, we consider that such corresponding segments must share the same topic distribution. Du et al. (2010) have shown that segment-level topics and their dependencies can improve modeling accuracy in a monolingual setting. Based on that research, we expect that segment-level topics can also be useful for modeling multilingual data. This study proposes a bilingual segmented topic model (BiSTM) that extends BiLDA to capture segment-level alignments through a hierarchical structure. In particular, BiSTM considers each document as a set of segments and models a document as a document-segment-word structure. The topic distribution of each segment (per-segment topic distribution) is generated using a Pitman– Yor process (PYP) (Pitman and Yor, 1997), in which the base measure is the topic distribution of the related document (per-document topic distribution). In addition, BiSTM introduces a binary variable that indicates whether two segments in different languages are aligned. If two segments are aligned, their per-segment topic distributions are shared; if they are not aligned, they are independently generated. BiSTM leverages existing segments from a given segmentation. However, a segmentation is not always given, and a given segmentation might not be optimal for statistical modeling. Therefore, this study also presents a model, BiSTM+TS, that incorporates unsupervised topic segmentation into BiSTM. BiSTM+TS integrates point-wise boundary sampling into BiSTM in a manner similar to that proposed by Du et al. (2013) and infers segmentation boundaries and latent topics jointly. Experiments using an English–Japanese and English–French Wikipedia corpus show that the proposed models (BiSTM and BiSTM+TS) significantly outperform the standard bilingual topic model (BiLDA) in terms of perplexity, and that they improve performance in translation extraction (up to +0.083 top 1 accuracy). The experiments also reveal that BiSTM+TS is comparable to BiSTM, which uses manually provided segmentation, i.e., section boundaries in Wikipedia articles. 2 Bilingual LDA This section describes the BiLDA model (Mimno et al., 2009; Ni et al., 2009), which we take as                         Figure 2: Graphical Model of BiLDA Algorithm 1 Generative Process of BiLDA 1: for each topic k ∈{1, ..., K} do 2: for each language l ∈{e, f} do 3: choose ϕl k ∼Dirichlet(βl) 4: end for 5: end for 6: for each document pair di (i ∈{1, ..., D}) do 7: choose θi ∼Dirichlet(α) 8: for each language l ∈{e, f} do 9: for each word wl im (m ∈{1, ..., Nl i}) do 10: choose zl im ∼Multinomial(θi) 11: choose wl im ∼p(wl im|zl im, ϕl) 12: end for 13: end for 14: end for our baseline. BiLDA is a bilingual extension of basic monolingual LDA (Blei et al., 2003) for a document-aligned comparable corpus. While monolingual LDA assumes that each document has its own topic distribution, BiLDA assumes that aligned documents share the same topic distribution and discovers latent cross-lingual topics. Algorithm 1 and Figure 2 show the generative process and graphical model, respectively, of BiLDA. BiLDA models a document-aligned comparable corpus, i.e., a set of D document pairs in two languages, e and f. Each document pair di (i ∈{1, ..., D}) comprises aligned documents in the language e and f: di=(de i, df i ). BiLDA assumes that each topic k ∈{1, ..., K} comprises the set of a discrete distribution over words for each language. 
Each language-specific per-topic word distribution ϕl k (l ∈{e, f}) is drawn from a Dirichlet distribution with the prior βl (Steps 1-5). To generate a document pair di, the perdocument topic distribution θi is first drawn from a Dirichlet distribution with the prior α (Step 7). Thus, aligned documents de i and df i share the same topic distribution. Then, for each word at m ∈ {1, ..., Nl i} in document dl i in language l, a latent topic assignment zl im is drawn from a multinomial 1267                                Figure 3: Graphical Model of BiSTM Algorithm 2 Generative Process of BiSTM 1: for each topic k ∈{1, ..., K} do 2: for each language l ∈{e, f} do 3: choose ϕl k ∼Dirichlet(βl) 4: end for 5: end for 6: for each document pair di (i ∈{1, ..., D}) do 7: choose θi ∼Dirichlet(α) 8: if yi are not given then 9: choose γi ∼Beta(η0, η1) 10: choose yi ∼Bernoulli(γi) 11: end if 12: generate aligned segment sets ASi = genAS(yi) 13: for each set ASig (g ∈{1, ..., |ASi|}) do 14: choose νig ∼PYP(a, b, θi) 15: end for 16: for each language l ∈{e, f} do 17: for each segment sl ij (j ∈{1, ..., Sl i}) do 18: get index of sl ij in ASi: g =get idx(ASi,sl ij) 19: for each word wl ijm (m ∈{1, ..., Nl ij}) do 20: choose zl ijm ∼Multinomial(νig) 21: choose wl ijm ∼p(wl ijm|zl ijm, ϕl) 22: end for 23: end for 24: end for 25: end for distribution with the prior θi (Step 10). Later, a word wl im is drawn from a probability distribution p(wl im|zl im, ϕl) given the topic zl im (Step 11). 3 Bilingual Segmented Topic Model Here, we describe BiSTM, which extends BiLDA to capture segment-level alignments. Algorithm 2 and Figure 3 show the generative process and graphical model, respectively, of BiSTM. As can be seen in Figure 3, BiSTM introduces a segmentlevel layer between the document- and word-level layers in both languages. In other words, persegment topic distributions for each language, νe and νf, are introduced between per-document topic distributions θ and topic assignments for words, ze and zf. In addition, BiSTM incorporates binary variables y to represent segment-level alignments. Each document dl i in a pair of aligned documents di is divided into Sl i segments: dl i = ∪Sl i j=1 sl ij. BiSTM makes the same assumption for per-topic word distributions as BiLDA, i.e., ϕl k are language-specific and drawn from Dirichlet distributions (Steps 1-5). In the generative process for a document pair di, the per-document topic distribution θi is first drawn in the same way as in BiLDA (Step 7). Thus, in BiSTM, each document pair shares the same topic distribution. Then, if segment-level alignments are not given, yi are generated (Steps 8-11). We assume that each document pair di has a probability γi that indicates comparability between segments across languages. γi is drawn from a Beta distribution with the priors η0 and η1 (Step 9). Then, each of yi is drawn from a Bernoulli distribution with the prior γi (Step 10). Here, yijj′ = 1 if and only if se ij and sf ij′ are aligned; otherwise, yijj′ = 0. Note that if segment-level alignments are observed, then Steps 8-11 are skipped. Later, a set of aligned segment sets ASi is generated based on yi (Step 12). For example, given de i = {se i1, se i2}, df i = {sf i1, sf i2, sf i3}, yi11 and yi12 are 1, and the other y’s are 0, ASi = {ASi1 = {se i1, sf i1, sf i2}, ASi2 = {se i2}, ASi3 = {sf i3}} is generated in Step 12. 
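One natural reading of genAS in Step 12 is to take connected components of the graph whose nodes are the segments of the two documents and whose edges are the pairs with yijj′ = 1; the paper does not spell out the construction, so the sketch below is an interpretation. On the example above it returns exactly {se i1, sf i1, sf i2}, {se i2}, and {sf i3}.

```python
from typing import Dict, List, Set, Tuple

def gen_aligned_segment_sets(
    n_e: int,                       # number of segments S^e_i in the language-e document
    n_f: int,                       # number of segments S^f_i in the language-f document
    y: Dict[Tuple[int, int], int],  # y[(j, j')] = 1 iff s^e_ij and s^f_ij' are aligned
) -> List[Set[str]]:
    """Group segments into aligned segment sets AS_i as connected components of
    the segment graph induced by y. Segments are labelled 'e0', ..., 'f0', ..."""
    # union-find over segment labels
    parent = {f"e{j}": f"e{j}" for j in range(n_e)}
    parent.update({f"f{j}": f"f{j}" for j in range(n_f)})

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for (j, jp), aligned in y.items():
        if aligned:
            parent[find(f"e{j}")] = find(f"f{jp}")

    groups: Dict[str, Set[str]] = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

# The example from the text: two e-segments, three f-segments,
# y_i11 = y_i12 = 1 and all other y's 0 (0-based indices here).
print(gen_aligned_segment_sets(2, 3, {(0, 0): 1, (0, 1): 1}))
# -> [{'e0', 'f0', 'f1'}, {'e1'}, {'f2'}]  (set/element order may vary)
```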
Then, for each aligned segment set ASig (g ∈ {1, ..., |ASi|}), the per-segment topic distribution νig is obtained from a Pitman–Yor process with the base measure θi, the concentration parameter a, and the discount parameter b (Step 14). Through Steps 12-15, aligned segments indicated by y share the same per-segment topic distribution. For instance, se i1, sf i1, and sf i2 have the same topic distribution νi1 ∼PYP(a, b, θi) in the above example. Then, for each word at m ∈{1, ..., Nl ij} in segment sl ij in document dl i in language l, a latent topic assignment zl ijm is drawn from a multinomial distribution with the prior νig (Step 20), where g denotes the index of the element set of ASi that includes the segment sl ij, e.g., g for sf i2 is 1. Subsequently, a word wl ijm is drawn based on the assigned topic zl ijm and the language-specific per-topic word distribution ϕl in the same manner as in BiLDA (Step 21). 1268 tigk Table count of topic k in the CRP for aligned segment set g in document pair i. tig K-dimensional vector, where k-th value is tigk. tig· Total table count in aligned segment set g in document pair i, i.e., ∑ k tigk. nigk Total number of words with topic k in aligned segment set g in document pair i. nig· Total number of words in aligned segment set g in document pair i, i.e., ∑ k nigk. M l kw Total number of word w with topic k in language l. M l k |W l|-dimensional vector, where w-th value is Ml kw. Table 1: Statistics used in our Inference 3.1 Inference for BiSTM In inference, we find the set of latent variables θ, ν, z, and ϕ that maximizes their posterior probability given the model parameters α, β and observations w, y, i.e., p(θ, ν, z, ϕ|α, β, w, y). Here, a language-dependent variable without a superscript denotes both of the variable in language e and that in f, e.g., z = {ze, zf}. Unfortunately, as in other probabilistic topic models, such as LDA and BiLDA, we cannot compute this posterior using an exact inference method. This section presents an approximation method for BiSTM based on blocked Gibbs sampling, inspired by Du et al. (2013). In our inference, the hierarchy in BiSTM, i.e., the generation of ν and z, is explained by the Chinese restaurant process (CRP), through which the parameters θ, ν, and ϕ are integrated out, and the statistics on table counts in the CRP, t, are introduced. Table 1 lists all statistics used in our inference, where W l denotes a vocabulary set in language l. Moreover, to accelerate convergence, we introduce an auxiliary binary variable δl ijm for wl ijm, indicating whether wl ijm is the first customer on a table (δl ijm = 1) or not (δl ijm = 0), and tigk is computed based on δ in the same manner as in Chen et al. (2011): tigk = ∑ sl ij∈ASig Nl ij ∑ m=1 δl ijmI(zl ijm = k), where I(x) is a function that returns 1 if the condition x is true and 0 otherwise. Our inference groups zl ijm and δl ijm (each group is called a “block”) and jointly samples them. Moreover, if y is not observed, our inference alternates two different kinds of blocks, (zl ijm, δl ijm) and yijj′. In each sampling, individual variables are resampled, conditioned on all other variables. In the following, we describe each sampling stage. Sampling (z, δ): The joint posterior distribution of z, w, and δ is induced in a manner similar to that in Du et al. 
(2010; 2013): p(z, w, δ|α, β, a, b, y) = D ∏ i=1 ( BetaK(α + ∑ ASi tig) BetaK(α) ∏ ASi ((b|a)tig· (b)nig· K ∏ k=1 S ( nigk, tigk, a )(nigk tigk )−1)) K ∏ k=1 (BetaW e(βe + M e k) BetaW e(βe) BetaW f (βf + M f k ) BetaW f (βf) ) , where BetaK(·) and BetaW l(·) are K- and |W l|dimensional beta functions, respectively, (b|a)n is the Pochhammer symbol3, and (b)n is given by (b|1)n. S(n, m, a) is a generalized Stirling number of the second kind (Hsu and Shiue, 1998), which is given by the linear recursion S(n + 1, m, a) = S(n, m −1, a) + (n −ma)S(n, m, a). To reduce computational cost, the Stirling numbers are preliminarily calculated in a logarithm format (Buntine and Hutter, 2012). Then, the cached values are used in our sampling. The joint conditional distributions of zl ijm and δl ijm are obtained from the above joint distribution using Bayes’ rule: p(zl ijm = k, δl ijm = 1|z−zl ijm, w, δ−δl ijm, α, β, a, b, y) = βl wl ijm + Ml kwl ijm ∑ w∈W l(βlw + Ml kw) αk + ∑ ASi tigk ∑K k=1(αk + ∑ ASi tigk) b + atig′· b + nig′· S(nig′k + 1, tig′k + 1, a) S(nig′k, tig′k, a) tig′k + 1 nig′k + 1, p(zl ijm = k, δl ijm = 0|z−zl ijm, w, δ−δl ijm, α, β, a, b, y) = βl wl ijm + Ml kwl ijm ∑ w∈W l(βlw + Ml kw) 1 b + nig′· S(nig′k + 1, tig′k, a) S(nig′k, tig′k, a) nig′k + 1 −tig′k nig′k + 1 , where sl ij is included in ASig′. Sampling y: In our inference, each aligned segment set corresponds to a restaurant in the CRP. We regard the sampling of yijj′ as the choice of splitting or merging restaurant(s) in a manner similar to that 3(b|a)n = ∏n−1 t=0 (b + ta). 1269 in the sampling of segmentation boundaries in Du et al. (2013). In particular, if yijj′ = 0, then one aligned segment set ASm is split into two aligned segment sets ASl and ASr, where ASl, ASr, and ASm include se ij, sf ij′, and both, respectively. If yijj′ = 1, then ASl and ASr are merged to ASm. For simplicity, our inference specifies ASl and ASr based on the current y as follows: if ASi(se ij) = ASi(sf ij′), then ASl = {se ij} ∪ASf i (se ij) \ {sf ij′} and ASr = {sf ij′} ∪ ASe i (sf ij′)\{se ij}; otherwise, ASl = ASi(se ij) and ASr = ASi(sf ij′). Here, ASi(j) is the element set of ASi that includes the segment j, and ASl i(j) is the set of segments in language l included in ASi(j). For example, in the example in Section 3, ASi(sf i1) = ASi1 = {se i1, sf i1, sf i2}, ASe i (sf i1) = {se i1}, and ASf i (sf i1) = {sf i1, sf i2}. In addition, if yi11 = 0, then ASm = {se i1, sf i1, sf i2} is split into ASl = {se i1} ∪ASf i (se i1) \ {sf i1} = {se i1, sf i2} and ASr = {sf i1} ∪ASe i (sf i1) \ {se i1} = {sf i1}. If yi23 = 1, then ASl = ASi(se i2) = {se i2} and ASr = ASi(sf i3) = {sf i3} are merged to ASm = {se i2, sf i3}. The conditional distributions of yijj′ are as follows: p(yijj′ = 0|y−yijj′, z, w, δ, α, a, b, η0, η1) ∝ η0 + ci0 η0 + η1 + ci0 + ci1 BetaK ( α + ∑ ASi tig ) ∏ g∈{ASl,ASr} (b|a)tig· (b)nig· K ∏ k=1 S(nigk, tigk, a), p(yijj′ = 1|y−yijj′, z, w, t \ T, α, a, b, η0, η1) ∝ ∑ T ( η1 + ci1 η0 + η1 + ci0 + ci1 BetaK ( α + ∑ ASi tig ) (b|a)ti,ASm,· (b)ni,ASm,· K ∏ k=1 S(ni,ASm,k, ti,ASm,k, a) ) , where T is the set of tigk such that for either or both of ASl and ASr, tigk = 1. ci0 and ci1 are the total number of yi’s whose values are 0 and that of yi’s whose values are 1, respectively. Note that we change yi’s that relate to the selected action (merging or splitting), in addition to yijj′ to maintain consistency between y and the aligned segment sets. 
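The cached Stirling numbers mentioned above can be tabulated directly from this linear recursion in log space; the base cases and the table layout below are the standard ones for generalized Stirling numbers and are stated here as assumptions.

```python
import numpy as np

def log_gen_stirling_table(N: int, a: float) -> np.ndarray:
    """Table of log S(n, m, a) for 0 <= m <= n <= N, built from the recursion
    S(n+1, m, a) = S(n, m-1, a) + (n - m*a) * S(n, m, a)
    in log space. Cells with S = 0 (m > n, or m = 0 < n) hold -inf."""
    logS = np.full((N + 1, N + 1), -np.inf)
    logS[0, 0] = 0.0                       # S(0, 0, a) = 1
    for n in range(N):
        for m in range(1, n + 2):
            term1 = logS[n, m - 1]         # S(n, m-1, a)
            term2 = -np.inf
            if np.isfinite(logS[n, m]):    # coefficient n - m*a is positive here for 0 <= a < 1
                term2 = np.log(n - m * a) + logS[n, m]
            logS[n + 1, m] = np.logaddexp(term1, term2)
    return logS

# Ratios such as S(n+1, m, a) / S(n, m, a) appearing in the sampling equations
# are then exp(logS[n+1, m] - logS[n, m]).
logS = log_gen_stirling_table(50, a=0.2)
```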
Inference of θ, ν, ϕ: Although our inference does not directly estimate θ, ν, and ϕ, these variables can be inferred from the following posterior expected values via Algorithm 3 Generative Process for Segments 1: for each document dl i (i ∈{1, ..., D}) do 2: choose πl i ∼Beta(λ0,λ1) 3: for each passage ul ih (h ∈{1, ..., Ul i}) do 4: choose ρl ih ∼Bernoulli(πl i) 5: end for 6: sl i = concatenate(ul i, ρl i) 7: end for sampling: ˆθik = Ezi,ti|wi,α,β,a,b,y [ αk + ∑ ASi tigk ∑K k=1(αk + ∑ ASi tigk) ] , ˆνigk = Ezi,ti|wi,α,β,a,b,y [ nigk −atigk b + nig· + θik atig· + b b + nig· ] , ˆϕl kw = Ez,t|w,α,β,a,b,y [ βl w + M l kw ∑ w′∈W l(βl w′ + Ml kw′) ] . 4 Integration of Topic Segmentation into BiSTM (BiSTM+TS) To infer segmentation boundaries simultaneously with cross-lingual topics, we integrate the unsupervised Bayesian topic segmentation method proposed by Du et al. (2013) into the proposed BiSTM (BiSTM+TS). We assume that each segment is a sequence of topically-related passages. In particular, we consider a sentence as a passage. Our segmentation model defines a segment in document dl i by a boundary indicator variable ρl ih for each passage ul ih (h ∈{1, ..., Ul i}); ρl ih is 1 if there is a boundary after passage ul ih (otherwise 0). For example, ρl i = (0, 1, 0, 0, 1) indicates that the document dl i comprises the two segments {ul i1, ul i2} and {ul i3, ul i4, ul i5}. Algorithm 3 shows the generative process for segments. The generative process of BiSTM+TS inserts Algorithm 3 between Steps 7 and 8 of Algorithm 2. Note that two documents (de i, df i ) ∈ di are segmented independently. BiSTM+TS assumes that each document dl i has its own topic shift probability πl i. For each document dl i, πl i is first drawn from a Beta distribution with the priors λ0 and λ1 (Step 2). Then, for each passage ul ih (h ∈{1, ..., Ul i}), ρl ih is drawn from a Bernoulli distribution with the prior πl i (Step 4). Finally, segments sl i are generated by concatenating passages based on ρl i (Step 6). 1270 4.1 Inference for BiSTM+TS Our inference for BiSTM+TS alternates three different kinds of blocks, sampling of ρ and samplings for BiSTM ((z, δ) and y). The conditional distribution of ρ comprises the Gibbs probability for splitting one segment sm into two segments sr and sl by placing the boundary after ul ih (ρl ih = 1) and that for merging sr and sl to sm by removing the boundary after ul ih (ρl ih = 0). These probabilities are estimated in the same manner as the conditional probabilities of yijj′, where y (yijj′ = 0, 1), ASl, ASr, ASm, η0, and η1 are replaced with ρ (ρl ih = 1, 0), sl, sr, sm, λ1, and λ0, respectively, and the statistics t and n are summed for every segment rather than for every aligned segment set (see Equation (6) and (9) in Du et al. (2013)). Our inference assumes that sampling ρ does not depend on aligned segments in the other language, i.e., y4. After splitting or merging, we set the y’s of sm, sl, and sr as follows: if sm is split into sl and sr, then AS(sl) = AS(sm) and AS(sr) = AS(sm); if sl and sr are merged to sm, then AS(sm) = AS(sl) ∪AS(sr). 5 Experiment We evaluated the proposed models in terms of perplexity and performance in translation pair extraction, which is a well-known application that uses a bilingual topic model. We used a document-aligned comparable corpus comprising 3,995 document pairs, each of which is a Japanese Wikipedia article in the Kyoto Wiki Corpus5 and its corresponding English Wikipedia article6. 
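A single-sample approximation of the posterior expectations for θ, ν, and ϕ given at the start of this subsection can be computed directly from the collected statistics; the array shapes and argument names in this sketch are assumptions.

```python
import numpy as np

def point_estimates(alpha, beta_l, a, b, t_i, n_i, M_l):
    """Single-sample point estimates of theta_i, nu_i and phi^l.

    alpha : (K,)      Dirichlet prior over topics
    beta_l: (V_l,)    Dirichlet prior over words of language l
    a, b  : PYP hyperparameters, as in the sampling equations above
    t_i   : (G, K)    table counts t_{igk} for the G aligned segment sets of pair i
    n_i   : (G, K)    word counts  n_{igk} for the same sets
    M_l   : (K, V_l)  topic-word counts M^l_{kw} for language l
    """
    # theta_{ik} proportional to alpha_k + sum_g t_{igk}
    unnorm = alpha + t_i.sum(axis=0)
    theta_i = unnorm / unnorm.sum()
    # nu_{igk} = (n_{igk} - a t_{igk}) / (b + n_{ig.}) + theta_{ik} (a t_{ig.} + b) / (b + n_{ig.})
    n_dot = n_i.sum(axis=1, keepdims=True)
    t_dot = t_i.sum(axis=1, keepdims=True)
    nu_i = (n_i - a * t_i) / (b + n_dot) + theta_i * (a * t_dot + b) / (b + n_dot)
    # phi^l_{kw} proportional to beta^l_w + M^l_{kw}
    phi_l = (beta_l + M_l) / (beta_l + M_l).sum(axis=1, keepdims=True)
    return theta_i, nu_i, phi_l
```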
Note that the English articles were collected from the English Wikipedia database dump (2 June 2015)7 based on inter-language links, even though the original Kyoto Wiki corpus is a parallel corpus, in which each sentence in the Japanese articles is manually translated into English. Thus, our experimental data is not a parallel corpus. We extracted texts from the collected English articles using an open-source script8. All Japanese and 4We leave a bilingual extension of the topic segmentation, i.e., incorporation of y, for future work. 5http://alaginrc.nict.go.jp/ WikiCorpus/index_E.html 6We filtered out the Japanese articles that do not have corresponding English articles. 7http://dumps.wikimedia.org/enwiki/ 8https://github.com/attardi/ wikiextractor/ English texts were segmented using MeCab9 and TreeTagger10 (Schmid, 1994), respectively. Then, function words were removed, and the remaining words were lemmatized to reduce data sparsity. For translation extraction experiments, we automatically created a gold-standard translation set according to Liu et al. (2013). We first computed p(we|wf) and p(wf|we) by running IBM Model 4 on the original Kyoto Wiki corpus, which is a parallel corpus, using GIZA++ (Och and Ney, 2003), and then extracted word pairs ( ˆ we, ˆ wf) that satisfy both of the following conditions: ˆ we = argmaxwep(we|wf = ˆ wf) and ˆ wf = argmaxwf p(wf|we = ˆ we). Finally, we eliminated word pairs that do not appear in the document pairs in the document-aligned comparable corpus. We used all 7,930 Japanese words in the resulting gold-standard set as the evaluation input. 5.1 Competing Methods We compared the proposed models (BiSTM and BiSTM+TS) with a standard bilingual topic model (BiLDA). BiSTM considers each section in Wikipedia articles as a segment. Note that alignments between sections are not given in our experimental data. Thus, y is inferred in both BiSTM and BiSTM+TS. As in the proposed models, BiLDA was trained using Gibbs sampling (Mimno et al., 2009; Ni et al., 2009; Vuli´c et al., 2015). In the training of each model, each variable was first initialized. Here, zl ijm is randomly initialized to an integer between 1 and K, and each of δl ijm, yijj′, and ρl ih is randomly initialized to 0 or 1. We then performed 10,000 Gibbs iterations. We used the symmetric prior αk = 50/K and βl w = 0.01 over θ and ϕl, respectively, in accordance with Vuli´c et al. (2011). The hyperparameters a, b, λ0, and λ1 were set to 0.2, 10, 0.1, and 0.1, respectively, in accordance with Du et al. (2010; 2013). Both η0 and η1 were set to 0.2 as a result of preliminary experiments. We used several values of K to measure the impact of topic size: we used K = 100 and K = 400 in accordance with Liu et al. (2013) in addition to the suggested value K = 2, 000 in Vuli´c et al. (2011). In the translation extraction experiments, 9http://taku910.github.io/mecab/ 10http://www.cis.uni-muenchen.de/ ˜schmid/tools/TreeTagger/ 1271 Model K=100 K=400 K=2,000 BiLDA 693.6 530.7 479.9 BiSTM 520.1 429.3 394.6 BiSTM+TS 537.5 445.3 411.8 Table 2: Test Set Perplexity we used two translation extraction methods, i.e., Cue (Vuli´c et al., 2011) and Liu (Liu et al., 2013). Both methods first infer crosslingual topics for words using a bilingual topic model (BiLDA/BiSTM/BiSTM+TS) and then extract word pairs (we, wf) with a high value of the probability p(we|wf) defined by the inferred topics. Cue calculates p(we|wf) = ∑K k=1 p(we|k)p(k|wf), where p(k|w) ∝ p(w|k) ∑K k=1 p(w|k) and p(w|k) = ϕkw. 
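Once the per-topic word distributions have been estimated, the Cue score above reduces to a couple of array operations; the sketch assumes the distributions are stored as K x |V| matrices.

```python
import numpy as np

def cue_scores(phi_e: np.ndarray, phi_f: np.ndarray, wf: int) -> np.ndarray:
    """Cue translation scores p(w_e | w_f) for every target-language word w_e.

    phi_e: (K, |V_e|) per-topic word distributions for language e
    phi_f: (K, |V_f|) per-topic word distributions for language f
    wf   : index of the source word in V_f
    """
    # p(k | w_f) = phi^f_{k, w_f} / sum_k' phi^f_{k', w_f}
    p_k_given_wf = phi_f[:, wf] / phi_f[:, wf].sum()
    # p(w_e | w_f) = sum_k p(w_e | k) * p(k | w_f)
    return phi_e.T @ p_k_given_wf

# Top-N translation candidates for a source word wf:
# top_n = np.argsort(-cue_scores(phi_e, phi_f, wf))[:N]
```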
Liu first converts a document-aligned comparable corpus into a topic-aligned parallel corpus according to the topics of words and computes p(we|wf, k) by running IBM Model 1 on the parallel corpus. Liu then calculates p(we|wf) = ∑K k=1 p(we|wf, k)p(k|wf). Hereafter, a bilingual topic model used in an extraction method is shown in parentheses, e.g., Cue(BiLDA) denotes Cue with BiLDA. 5.2 Experimental Results We evaluated the predictive performance of each model by computing the test set perplexity based on 5-fold cross validation. A lower perplexity indicates better generalization performance. Table 2 shows the perplexity of each model. As can be seen, BiSTM and BiSTM+TS are better than BiLDA in terms of perplexity. We measured the performance of translation extraction with top N accuracy (ACCN ), the number of test words whose top N translation candidates contain a correct translation over the total number of test words (7,930). Table 3 summarizes ACC1 and ACC10 for each model. As can be seen, Cue/Liu(BiSTM) and Cue/Liu(BiSTM+TS) significantly outperform Cue/Liu(BiLDA) (p < 0.01 in the sign test). This indicates that BiSTM and BiSTM+TS improve the performance of translation extraction for both the Cue and Liu methods by assigning more suitable topics. Both experiments prove that capturing segmentlevel alignments is effective for modeling bilingual data. In addition, these experiments show that BiSTM+TS is comparable with BiSTM, indicatACC1 Method K=100 K=400 K=2,000 Cue(BiLDA) 0.024 0.056 0.101 Cue(BiSTM) 0.055 0.112 0.184 Cue(BiSTM+TS) 0.052 0.107 0.176 Liu(BiLDA) 0.206 0.345 0.426 Liu(BiSTM) 0.287 0.414 0.479 Liu(BiSTM+TS) 0.283 0.406 0.467 ACC10 Method K=100 K=400 K=2,000 Cue(BiLDA) 0.093 0.170 0.281 Cue(BiSTM) 0.218 0.286 0.410 Cue(BiSTM+TS) 0.196 0.274 0.398 Liu(BiLDA) 0.463 0.550 0.603 Liu(BiSTM) 0.531 0.625 0.671 Liu(BiSTM+TS) 0.536 0.612 0.667 Table 3: Performance of Translation Extraction Reference y = 1 Reference y = 0 Inference y = 1 195 174 Inference y = 0 43 1132 Table 4: Distribution of Segment-level Alignments ing that the proposed model could yield a significant benefit even if the boundaries of segments are unknown. Tables 2 and 3 show that a larger topic size yields better performance for each model. Furthermore, Liu outperforms Cue regardless of the choice of bilingual topic models, which is consistent with previously reported results (Liu et al., 2013). The results of our experiments demonstrate that the proposed models have the same tendencies as BiLDA. 6 Discussion 6.1 Inferred Segment-level Alignments We created a reference set to evaluate segmentlevel alignments y inferred by BiSTM (K=2,000). We randomly selected 100 document pairs from the comparable corpus and then manually identified cross-lingual alignments between sections. Table 4 shows the distribution of inferred y values and that of y values in the reference set. As can be seen, the accuracy of y is 0.859 (1,327/1,544). The majority of false negatives (121/174) are sections that are not parallel but correspond partially. An example is the alignment between the 1272 Model Japanese article English article BiSTM 4.8 2.9 BiSTM+TS 10.6 4.1 Table 5: Average Number of Segments Japanese section “history” and the English section “Bujutsu (old type of Budo)” in the “Budo (a Japanese martial art)” article pair, where a part of the English section “Bujutsu” is described in the Japanese section “history.” Such errors might not necessarily have a negative effect, because partial alignments can be useful. 
6.2 Inferred Segmentation Boundaries This section compares segment boundaries inferred by BiSTM+TS (K=2,000) with section boundaries in the original articles, which have been referred to by BiSTM. The recall of BiSTM+TS for the original section boundaries is 0.727. This indicates that the unsupervised segmentation in BiSTM+TS finds drastic topical changes, i.e., section boundaries, with high recall. Table 5 shows the average number of segments per article for each model. As can be seen, BiSTM+TS divides an article into segments smaller than the original sections. This seems to be reasonable, because some original sections include multiple topics. However, Tables 2 and 3 show that inferred boundaries do not work better than section boundaries. One reason for that is that some errors are caused by a sparseness problem, when BiSTM+TS separates an article into extremely fine-grained segments. In addition, Table 5 reveals that BiSTM+TS increases the gap between languages. Thus, segmentation with a comparable granularity between languages might be favorable for the proposed models. 6.3 Effectiveness for an English–French Wikipedia Corpus We evaluated BiLDA, BiSTM, and BiSTM+TS in terms of perplexity and performance in translation extraction on an English–French Wikipedia corpus to verify the effectiveness of the proposed models for language pairs other than English–Japanese. The settings, e.g., parameters, for each model are the same as in Section 5. Note that we report only the performances of each model with K = 2, 000, because all models achieved the best performances when K = 2, 000. Model Test Set Perplexity BiLDA 439.1 BiSTM 379.4 BiSTM+TS 396.6 Model ACC1 ACC10 Cue(BiLDA) 0.219 0.556 Cue(BiSTM) 0.275 0.580 Cue(BiSTM+TS) 0.257 0.582 Liu(BiLDA) 0.715 0.838 Liu(BiSTM) 0.742 0.859 Liu(BiSTM+TS) 0.732 0.852 Table 6: Performance on an English–French Wikipedia Corpus (K = 2, 000) We collected French articles that correspond to the English articles used in the experiments in Section 5, from the French Wikipedia database dump (2 June 2015) based on inter-language links. As a result, our English–French corpus comprises 3,159 document pairs. The French articles were preprocessed in the same manner as the English articles: text extraction using the open-source script, segmentation using TreeTagger, removal of function words, and lemmatization. We created a gold-standard translation set for translation extraction experiments using Google Translate service11 in a manner similar to that in Gouws et al. (2015) and Coulmance et al. (2015), translating the French words in our corpus using Google Translate, and then eliminating word pairs that do not appear in the document pairs in our corpus. We used the top 1,000 most frequent French words in the resulting gold-standard set as the evaluation input. Table 6 summarizes ACC1, ACC10, and perplexity. It shows that the proposed models are effective also for the English–French Wikipedia corpus. BiSTM and BiSTM+TS outperform BiLDA in terms of perplexity and performance of translation extraction, and BiSTM+TS works well even if the boundaries of segments are unknown. 7 Related Work Multilingual topic models other than BiLDA (Section 2) have been proposed for document-aligned comparable corpora. Fukumasu et al. (2012) applied SwitchLDA (Newman et al., 2006) and Correspondence LDA (Blei and Jordan, 2003), which 11http://translate.google.com/ 1273 were originally intended to work with multimodal data, such as annotated image data, to modeling multilingual text data. 
They also proposed a symmetric version of Correspondence LDA. Platt et al. (2010) projected monolingual models based on PLSA or Principal Component Analysis into a shared multilingual space with the constraint that document pairs must map to similar locations. Hu et al. (2014) proposed a multilingual tree-based topic model that uses a hierarchical bilingual dictionary in addition to document alignments. Note that these models do not consider segment-level alignments. There are several multilingual topic models tailored for data other than a document-aligned comparable corpus, including bilingual topic models for word alignment and machine translation on parallel sentence pairs (Zhao and Xing, 2006; Zhao and Xing, 2008). Some models have mined multilingual topics from unaligned text data by bridging the gap between different languages using a bilingual dictionary (Jagarlamudi and Daum´e III, 2010; Zhang et al., 2010; Negi, 2011). Boyd-Graber and Blei (2009) used parallel sentences in combination with a bilingual dictionary. However, these models have the drawback that they require a parallel corpus or a bilingual dictionary in advance, which cannot be obtained for some language pairs or domains. In a monolingual setting, some topic models that consider segment-level topics have been proposed. Du et al. (2010) considered a document as a set of segments and generated each per-segment topic distribution from the topic distribution of the related document through a Pitman–Yor process. Others have considered a document as a sequence of segments. Cheng et al. (2009) reflected the underlying sequences of segments’ topics by positing a permutation distribution over a document. Wang et al. (2011) modeled topical sequences in documents with a latent first-order Markov chain, and Du et al. (2012) generated each per-segment topic distribution from the topic distribution of its document and that of its previous segment. Note that none of these models have been extended to a multilingual setting. 8 Conclusions In this paper, we proposed BiSTM, which models a document hierarchically and deals with segmentlevel alignments. BiSTM assigns the same topic distribution to both aligned documents and aligned segments. We also presented an extended model, BiSTM+TS, that infers segmentation boundaries in addition to latent topics by incorporating unsupervised topic segmentation (Du et al., 2013). Our experimental results show that capturing segmentlevel alignments improves perplexity and translation extraction performance, and that BiSTM+TS yields a significant benefit even if the boundaries of segments are not given. This paper presented an extension to BiLDA, but hierarchical structures can also be incorporated into other bilingual topic models (Section 7). As future work, we would like to verify the effectiveness of the proposed models for other datasets or other cross-lingual tasks, such as cross-lingual document classification (Ni et al., 2009; Platt et al., 2010; Ni et al., 2011; Smet et al., 2011) and cross-lingual information retrieval (Vuli´c et al., 2013). Acknowledgments We thank Atsushi Fujita for valuable comments on earlier versions of this manuscript. References David M. Blei and Michael I. Jordan. 2003. Modeling Annotated Data. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, pages 127–134. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. 
Jordan Boyd-Graber and David M. Blei. 2009. Multilingual Topic Models for Unaligned Text. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 75–82. Wray Buntine and Marcus Hutter. 2012. A Bayesian View of the Poisson-Dirichlet Process. http:// arxiv.org/pdf/1007.0296.pdf. Harr Chen, S.R.K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global Models of Document Structure using Latent Permutations. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 371–379. Changyou Chen, Lan Du, and Wray Buntine. 2011. Sampling Table Configurations for the Hierarchical Poisson-Dirichlet Process. In Proceedings of the European Conference on Machine Learning and 1274 Principles and Practice of Knowledge Discovery in Databases 2011, pages 296–311. Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Transgram, Fast Cross-lingual Word-embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1109– 1113. Lan Du, Wray Buntine, and Huidong Jin. 2010. A Segmented Topic Model Based on the Twoparameter Poisson-Dirichlet Process. Machine Learning, 81(1):5–19. Lan Du, Wray Buntine, and Huidong Jin. 2012. Modelling Sequential Text with an Adaptive Topic Model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 535–545. Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic Segmentation with a Structured Topic Model. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 190–200. Kosuke Fukumasu, Koji Eguchi, and Eric P. Xing. 2012. Symmetric Correspondence Topic Models for Multilingual Text Analysis. In Advances in Neural Information Processing Systems 25, pages 1286– 1294. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast Bilingual Distributed Representations without Word Alignments. In Proceedings of the 32nd International Conference on Machine Learning, pages 748–756. Thomas Hofmann. 1999. Probabilistic Latent Semantic Indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–57. Leetsch C. Hsu and Peter Jau-Shyong Shiue. 1998. A Unified Approach to Generalized Stirling Numbers. Advances in Applied Mathematics, 20(3):366–384. Yuening Hu, Ke Zhai, Vladimir Eidelman, and Jordan Boyd-Graber. 2014. Polylingual Tree-Based Topic Models for Translation Domain Adaptation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1166– 1176. Jagadeesh Jagarlamudi and Hal Daum´e III. 2010. Extracting Multilingual Topics from Unaligned Comparable Corpora. In Proceedings of the 32nd European Conference on Advances in Information Retrieval, pages 444–456. Xiaodong Liu, Kevin Duh, and Yuji Matsumoto. 2013. Topic Models + Word Alignment = A Flexible Framework for Extracting Bilingual Dictionary from Comparable Corpus. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 212–221. David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual Topic Models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889. Sumit Negi. 2011. 
Mining Bilingual Topic Hierarchies from Unaligned Text. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 992–1000. David Newman, Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. 2006. Statistical Entitytopic Models. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 680–686. Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2009. Mining Multilingual Topics from Wikipedia. In Proceedings of the 18th International World Wide Web Conference, pages 1155–1156. Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2011. Cross Lingual Text Classification by Mining Multilingual Topics from Wikipedia. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pages 375–384. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29:19–51. Jim Pitman and Marc Yor. 1997. The Two-Parameter Poisson-Dirichlet Distribution Derived from a Stable Subordinator. The Annals of Probability, 25(2):855–900. John Platt, Kristina Toutanova, and Wen tau Yih. 2010. Translingual Document Representations from Discriminative Projections. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 251–261. Helmut Schmid. 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees. In Proceedings of the International Conference on New Methods in Language Processing, pages 44–49. Wim De Smet, Jie Tang, and Marie-Francine Moens. 2011. Knowledge Transfer Across Multilingual Corpora via Latent Topics. In Proceedings of the 15th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, pages 549– 560. Ivan Vuli´c, Wim De Smet, and Marie-Francine Moens. 2011. Identifying Word Translations from Comparable Corpora Using Latent Topic Models. In Proceedings of the 49th Annual Meeting of the Associ1275 ation for Computational Linguistics: Human Language Technologies, pages 479–484. Ivan Vuli´c, Wim De Smet, and Marie-Francine Moens. 2013. Cross-Language Information Retrieval Models Based on Latent Topic Models Trained with Document-Aligned Comparable Corpora. Information Retrieval, 16(3):331–368. Ivan Vuli´c, Wim De Smet, Jie Tang, and MarieFrancine Moens. 2015. Probabilistic Topic Modeling in Multilingual Settings: An Overview of Its Methodology and Applications. Information Processing & Management, 51(1):111–147. Hongning Wang, Duo Zhang, and ChengXiang Zhai. 2011. Structural Topic Model for Latent Topical Structure Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1526–1535. Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010. Cross-Lingual Latent Topic Extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1128– 1137. Bing Zhao and Eric P. Xing. 2006. BiTAM: Bilingual Topic AdMixture Models for Word Alignment. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 969–976. Bing Zhao and Eric P. Xing. 2008. HM-BiTAM: Bilingual Topic Exploration, Word Alignment, and Translation. In Advances in Neural Information Processing Systems 20, pages 1689–1696. 1276
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1277–1287, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Semantically and Additively Compositional Distributional Representations Ran Tian and Naoaki Okazaki and Kentaro Inui Tohoku University, Japan {tianran, okazaki, inui}@ecei.tohoku.ac.jp Abstract This paper connects a vector-based composition model to a formal semantics, the Dependency-based Compositional Semantics (DCS). We show theoretical evidence that the vector compositions in our model conform to the logic of DCS. Experimentally, we show that vector-based composition brings a strong ability to calculate similar phrases as similar vectors, achieving near state-of-the-art on a wide range of phrase similarity tasks and relation classification; meanwhile, DCS can guide building vectors for structured queries that can be directly executed. We evaluate this utility on sentence completion task and report a new state-of-the-art. 1 Introduction A major goal of semantic processing is to map natural language utterances to representations that facilitate calculation of meanings, execution of commands, and/or inference of knowledge. Formal semantics supports such representations by defining words as some functional units and combining them via a specific logic. A simple and illustrative example is the Dependency-based Compositional Semantics (DCS) (Liang et al., 2013). DCS composes meanings from denotations of words (i.e. sets of things to which the words apply); say, the denotations of the concept drug and the event ban is shown in Figure 1b, where drug is a list of drug names and ban is a list of the subjectcomplement pairs in any ban event; then, a list of banned drugs can be constructed by first taking the COMP column of all records in ban (projection “πCOMP”), and then intersecting the results with drug (intersection “∩”). This procedure defined how words can be combined to form a meaning. Better yet, the procedure can be concisely illustrated by the DCS tree of “banned drugs” (Figure 1a), which is similar to a dependency tree but possesses precise procedural and logical meaning (Section 2). DCS has been shown useful in question answering (Liang et al., 2013) and textual entailment recognition (Tian et al., 2014). Orthogonal to the formal semantics of DCS, distributional vector representations are useful in capturing lexical semantics of words (Turney and Pantel, 2010; Levy et al., 2015), and progress is made in combining the word vectors to form meanings of phrases/sentences (Mitchell and Lapata, 2010; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012; Paperno et al., 2014; Hashimoto et al., 2014). However, less effort is devoted to finding a link between vector-based compositions and the composition operations in any formal semantics. We believe that if a link can be found, then symbolic formulas in the formal semantics will be realized by vectors composed from word embeddings, such that similar things are realized by similar vectors; meanwhile, vectors will acquire formal meanings that can directly be used in execution or inference process. Still, to find a link is challenging because any vector compositions that realize such a link must conform to the logic of the formal semantics. 
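As a concrete illustration of the set-theoretic side sketched above, the following minimal Python sketch computes the denotation of "banned drugs" with plain sets of field-value records, following the informal description given in the introduction. The toy records are invented for illustration only, and the full formalism of Section 2 additionally uses inverse projections.

```python
# A denotation is a set of "things"; a thing is a bundle of Field=Value pairs.
def thing(**fields):
    return frozenset(fields.items())

def project(denotation, field):
    """pi_field: collect the values of `field` over all records."""
    return {dict(t)[field] for t in denotation if field in dict(t)}

# Toy denotations (invented records, for illustration only).
drug = {"Aspirin", "Thalidomide", "Ibuprofen"}        # a list of drug names
ban = {thing(SUBJ="Canada", COMP="Thalidomide"),      # subject-complement pairs
       thing(SUBJ="government", COMP="alcohol")}      # of banning events

# "banned drugs": take the COMP column of ban, then intersect with drug.
banned_drugs = drug & project(ban, "COMP")
print(banned_drugs)   # {'Thalidomide'}
```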
In this paper, we establish a link between DCS and certain vector compositions, achieving a vector-based DCS by replacing denotations of words with word vectors, and realizing the composition operations such as intersection and projection as addition and linear mapping, respectively. For example, to construct a vector for “banned drugs”, one takes the word vector vban and multiply it by a matrix MCOMP, corresponding to the projection πCOMP; then, one adds the result to the word vector vdrug to realize the intersection operation (Figure 1c). We provide a method to train the 1277 ban COMP drug ARG banned drugs projection intersection query vector: • dot products sorted drug marijuana cannabis trafficking thalidomide ... answer vectors: coarse-grained candidate ! list for “banned drugs” ARG Aspirin Thalidomide … COMP alcohol Thalidomide … SUBJ government Canada … drug ban ∩ = πCOMP projection intersection denotation of ! “banned drugs” (c) (d) (a) (b) alcohol Thalidomide … Thalidomide … ufood uthalidomide ucannabis … vban vdrug + vbanMCOMP Figure 1: (a) The DCS tree of “banned drugs”, which controls (b) the calculation of its denotation. In this paper, we learn word vectors and matrices such that (c) the same calculation is realized in distributional semantics. The constructed query vector can be used to (d) retrieve a list of coarsegrained candidate answers to that query. word vectors and linear mappings (i.e. matrices) jointly from unlabeled corpora. The rationale for our model is as follows. First, recent research has shown that additive composition of word vectors is an approximation to the situation where two words have overlapping context (Tian et al., 2015); therefore, it is suitable to implement an “and” or intersection operation (Section 3). We design our model such that the resulted distributional representations are expected to have additive compositionality. Second, when intersection is realized as addition, it is natural to implement projection as linear mapping, as suggested by the logical interactions between the two operations (Section 3). Experimentally, we show that vectors and matrices learned by our model exhibit favorable characteristics as compared with vectors trained by GloVe (Pennington et al., 2014) or those learned from syntactic dependencies (Section 5.1). Finally, additive composition brings our model a strong ability to calculate similar vectors for similar phrases, whereas syntactic-semantic roles (e.g. SUBJ, COMP) can be distinguished by different projection matrices (e.g. MSUBJ, MCOMP). We achieve near state-of-the-art performance on a wide range of phrase similarity tasks (Section 5.2) and relation classification (Section 5.3). Furthermore, we show that a vector as constructed above for “banned drugs” can be used as a query vector to retrieve a coarse-grained candiban COMP drug ARG A man sells banned drugs. sell man COMP SUBJ ARG ARG ARG John Mike … COMP Aspirin perfume … SUBJ John Mary … man sell Figure 2: DCS tree for a sentence date list of banned drugs, by sorting its dot products with answer vectors that are also learned by our model (Figure 1d). This is due to the ability of our approach to provide a language model that can find likely words to fill in the blanks such as “ is a banned drug” or “the drug is banned by .. .”. A highlight is the calculation being done as if a query is “executed” by the DCS tree of “banned drugs”. 
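For the distributional counterpart, the sketch below composes a query vector for "banned drugs" as vdrug + vban MCOMP and sorts candidate answer vectors by dot product, mirroring Figure 1c-d. All parameters here are random placeholders, so the printed ranking is meaningless; in the trained model of Sections 3-4, composition uses pairs of field matrices and every parameter is learned from corpora.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # toy dimensionality; the experiments use d = 250

# Placeholders standing in for trained query vectors (v), answer vectors (u)
# and the matrix realizing the projection pi_COMP.
v = {w: rng.normal(size=d) for w in ["drug", "ban"]}
u = {w: rng.normal(size=d) for w in ["thalidomide", "marijuana", "food", "chair"]}
M_COMP = rng.normal(size=(d, d))

# Intersection as addition, projection as linear mapping (Figure 1c).
q = v["drug"] + v["ban"] @ M_COMP

# Coarse-grained candidate list: sort answers by dot product (Figure 1d).
ranked = sorted(u, key=lambda w: float(q @ u[w]), reverse=True)
print(ranked)
```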
We quantitatively evaluate this utility on sentence completion task (Zweig et al., 2012) and report a new state-of-the-art (Section 5.4). 2 DCS Trees DCS composes meanings from denotations, or sets of things to which words apply. A “thing” (i.e. element of a denotation) is represented by a tuple of features of the form Field=Value, with a fixed inventory of fields. For example, a denotation ban might be a set of tuples ban = {(SUBJ=Canada, COMP=Thalidomide), . . .}, in which each tuple records participants of a banning event (e.g. Canada banning Thalidomide). Operations are applied to sets of things to generate new denotations, for modeling semantic composition. An example is the intersection of pet and fish giving the denotation of “pet fish”. Another necessary operation is projection; by πN we mean a function mapping a tuple to its value of the field N. For example, πCOMP(ban) is the value set of the COMP fields in ban, which consists of banned objects (i.e. {Thalidomide, . . .}). In this paper, we assume a field ARG to be names of things representing themselves, hence for example πARG(drug) is the set of names of drugs. For a value set V , we also consider inverse image π−1 N (V ) := {x | πN(x) ∈V }. For example, D1 := π−1 SUBJ(πARG(man)) consists of all tuples of the form (SUBJ=x, . . .), where x is a man’s name (i.e. x ∈πARG(man)). Thus, sell ∩D1 denotes men’s selling events (i.e. {(SUBJ=John, COMP=Aspirin), . . .} as in Figure 2). Similarly, the denotation of “banned 1278 girl/N ARG like/V SUBJ hat/N red/J ARG ARG COMP SUBJ Kids play on the grass. a red hat which girls like grass/N play/V kid/N ARG SUBJ on ARG (a) (b) Figure 3: DCS trees in this work drugs” as in Figure 1b is formally written as D2 := drug ∩π−1 ARG(πCOMP(ban)), Hence the following denotation D3 := sell ∩D1 ∩π−1 COMP(πARG(D2)) consists of selling events such that the SUBJ is a man and the COMP is a banned drug. The calculation above can proceed in a recursive manner controlled by DCS trees. The DCS tree for the sentence “a man sells banned drugs” is shown in Figure 2. Formally, a DCS tree is defined as a rooted tree in which nodes are denotations of content words and edges are labeled by fields at each ends. Assume a node x has children y1, . . . , yn, and the edges (x, y1), . . . , (x, yn) are labeled by (P1, L1), . . . , (Pn, Ln), respectively. Then, the denotation [[x]] of the subtree rooted at x is recursively calculated as [[x]] := x ∩ n\ i=1 π−1 Pi (πLi([[yi]])). (1) As a result, the denotation of the DCS tree in Figure 2 is the denotation D3 of “a man sells banned drugs” as calculated above. DCS can be further extended to handle phenomena such as quantifiers or superlatives (Liang et al., 2013; Tian et al., 2014). In this paper, we focus on the basic version, but note that it is already expressive enough to at least partially capture the meanings of a large portion of phrases and sentences. DCS trees can be learned from question-answer pairs and a given database of denotations (Liang et al., 2013), or they can be extracted from dependency trees if no database is specified, by taking advantage of the observation that DCS trees are similar to dependency trees (Tian et al., 2014). We use the latter approach, obtaining DCS trees by rule-based conversion from universal dependency (UD) trees (McDonald et al., 2013). Therefore, nodes in a DCS tree are content words in a UD tree, which are in the form of lemma-POS pairs (Figure 3). The inventory of fields is designed to be ARG, SUBJ, COMP, and all prepositions. 
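A minimal sketch of how the recursion in equation (1) can be evaluated over such a tree is given below, reusing the set representation from the earlier sketch. The tree encodes "a man sells banned drugs" (Figure 2); the toy records are invented for illustration only.

```python
def thing(**fields):
    return frozenset(fields.items())

def project(denotation, field):                  # pi_field
    return {dict(t)[field] for t in denotation if field in dict(t)}

def inverse_project(denotation, field, values):  # denotation intersected with pi_field^{-1}(values)
    return {t for t in denotation if dict(t).get(field) in values}

def denote(word_denotations, tree):
    """Equation (1): [[x]] = x  intersected with  pi_{P_i}^{-1}(pi_{L_i}([[y_i]])) over children."""
    word, children = tree
    result = word_denotations[word]
    for p_label, l_label, child in children:
        child_values = project(denote(word_denotations, child), l_label)
        result = inverse_project(result, p_label, child_values)
    return result

# Toy denotations (invented), with ARG holding a thing's own name.
D = {
    "man":  {thing(ARG="John"), thing(ARG="Mike")},
    "drug": {thing(ARG="Aspirin"), thing(ARG="Thalidomide")},
    "ban":  {thing(SUBJ="Canada", COMP="Thalidomide")},
    "sell": {thing(SUBJ="John", COMP="Thalidomide"),
             thing(SUBJ="Mary", COMP="perfume")},
}

# "A man sells banned drugs": sell -(SUBJ,ARG)-> man, sell -(COMP,ARG)-> drug -(ARG,COMP)-> ban
tree = ("sell", [("SUBJ", "ARG", ("man", [])),
                 ("COMP", "ARG", ("drug", [("ARG", "COMP", ("ban", []))]))])
print(denote(D, tree))   # the selling events whose SUBJ is a man and COMP a banned drug
```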
Prepositions are unlike content words which denote sets of things, but act as relations which we treat similarly as SUBJ and COMP. For example, a prepositional phrase attached to a verb (e.g. play on the grass) is treated as in Figure 3a. The presence of two field labels on each edge of a DCS tree makes it convenient for modeling semantics in several cases, such as a relative clause (Figure 3b). 3 Vector-based DCS For any content word w, we use a query vector vw to model its denotation, and an answer vector uw to model a prototypical element in that denotation. Query vector v and answer vector u are learned such that exp(v · u) is proportional to the probability of u answering the query v. The learning source is a collection of DCS trees, based on the idea that the DCS tree of a declarative sentence usually has non-empty denotation. For example, “kids play” means there exists some kid who plays. Consequently, some element in the play denotation belongs to π−1 SUBJ(πARG(kid)), and some element in the kid denotation belongs to π−1 ARG(πSUBJ(play)). This is a signal to increase the dot product of uplay and the query vector of π−1 SUBJ(πARG(kid)), as well as the dot product of ukid and the query vector of π−1 ARG(πSUBJ(play)). When optimized on a large corpus, the “typical” elements of play and kid should be learned by uplay and ukid, respectively. In general, one has Theorem 1 Assume the denotation of a DCS tree is not empty. Given any path from node x to y, assume edges along the path are labeled by (P, L), . . . , (K, N). Then, an element in the denotation y belongs to π−1 N (πK(. . . (π−1 L (πP(x) . . .). Therefore, for any two nodes in a DCS tree, the path from one to another forms a training example, which signals increasing the dot product of the corresponding query and answer vectors. It is noteworthy that the above formalization happens to be closely related to the skip-gram model (Mikolov et al., 2013b). The skip-gram learns a target vector vw and a context vector uw for each word w. It assumes the probability of a word y co-occurring with a word x in a context window is proportional to exp(vx · uy). Hence, if x and y co-occur within a context window, then one gets a signal to increase vx · uy. If the context window is taken as the same DCS tree, then 1279 the learning of skip-gram and vector-based DCS will be almost the same, except that the target vector vx becomes the query vector v, which is no longer assigned to the word x but the path from x to y in the DCS tree (e.g. the query vector for π−1 SUBJ(πARG(kid)) instead of vkid). Therefore, our model can also be regarded as extending skipgram to take account of the changes of meanings caused by different syntactic-semantic roles. Additive Composition Word vectors trained by skip-gram are known to be semantically additive, such as exhibited in word analogy tasks. An effect of adding up two skip-gram vectors is further analyzed in Tian et al. (2015). Namely, the target vector vw can be regarded as encoding the distribution of context words surrounding w. If another word x is given, vw can be decomposed into two parts, one encodes context words shared with x, and another encodes context words not shared. When vw and vx are added up, the non-shared part of each of them tend to cancel out, because non-shared parts have nearly independent distributions. As a result, the shared part gets reinforced. An error bound is derived to estimate how close 1 2(vw + vx) gets to the distribution of the shared part. 
We can see the same mechanism exists in vector-based DCS. In a DCS tree, two paths share a context word if they lead to a same node y; semantically, this means some element in the denotation y belongs to both denotations of the two paths (e.g. given the sentence “kids play balls”, π−1 SUBJ(πARG(kid)) and π−1 COMP(πARG(ball)) both contain a playing event whose SUBJ is a kid and COMP is a ball). Therefore, addition of query vectors of two paths approximates their intersection because the shared context y gets reinforced. Projection Generally, for any two denotations X1, X2 and any projection πN, we have πN(X1 ∩X2) ⊆πN(X1) ∩πN(X2). (2) And the “⊆” can often become “=”, for example when πN is a one-to-one map or X1 = π−1 N (V ) for some value set V . Therefore, if intersection is realized by addition, it will be natural to realize projection by linear mapping because (v1 + v2)MN = v1MN + v2MN (3) holds for any vectors v1, v2 and any matrix MN, which is parallel to (2). If πN is realized by a matrix MN, then π−1 N should correspond to the inverse matrix M−1 N , because πN(π−1 N (V )) = V for any value set V . So we have realized all composition operations in DCS. Query vector of a DCS tree Now, we can define the query vector of a DCS tree as parallel to (1): v[[x]] := vx + 1 n n X i=1 v[[yi]]MLiM−1 Pi . (4) 4 Training As described in Section 3, vector-based DCS assigns a query vector vw and an answer vector uw to each content word w. And for each field N, it assigns two matrices MN and M−1 N . For any path from node x to y sampled from a DCS tree, assume the edges along are labeled by (P, L), . . . , (K, N). Then, the dot product vxMPM−1 L . . . MKM−1 N ·uy gets a signal to increase. Formally, we adopt the noise-contrastive estimation (Gutmann and Hyv¨arinen, 2012) as used in the skip-gram model, and mix the paths sampled from DCS trees with artificially generated noise. Then, σ(vxMPM−1 L . . . MKM−1 N ·uy) models the probability of a training example coming from DCS trees, where σ(θ) = 1/{1 + exp(−θ)} is the sigmoid function. The vectors and matrices are trained by maximizing the log-likelihood of the mixed data. We use stochastic gradient descent (Bottou, 2012) for training. Some important settings are discussed below. Noise For any vxM1M−1 2 . . . M2l−1M−1 2l · uy obtained from a path of a DCS tree, we generate noise by randomly choosing an index i ∈[2, 2l], and then replacing Mj or M−1 j (∀j ≥i) and uy by MN(j) or M−1 N(j) and uz, respectively, where N(j) and z are independently drawn from the marginal (i.e. unigram) distributions of fields and words. Update For each data point, when i is the chosen index above for generating noise, we view indices j < i as the ”target” part, and j >= i as the ”context”, which is completely replaced by the noise, as an analogous to the skip-gram model. Then, at each step we only update one vector and one matrix from each of the target, context, and noise part; more specifically, we only update vx, Mi−1 or M−1 i−1, Mi or M−1 i , MN(i) or M−1 N(i), uy and uz, at the step. This is much faster than always updating all matrices. 
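As a rough sketch of the scoring function being optimized here, a path from x to y is scored by σ(vx MP M−1L · · · MK M−1N · uy), and the objective raises this probability for paths sampled from DCS trees while lowering it for noise. The single-edge "kids play" example and all parameters below are placeholders, and the gradient updates and noise-index sampling described above are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def path_score(v_x, matrices, u_y):
    """sigma(v_x M_P M_L^{-1} ... M_K M_N^{-1} . u_y): probability that the
    path is a real corpus path rather than noise."""
    h = v_x
    for M in matrices:            # alternating projection / inverse-projection matrices
        h = h @ M
    return sigmoid(float(h @ u_y))

def nce_loss(v_x, matrices, u_pos, noise_matrices, u_noise):
    """Negative log-likelihood for one corpus path mixed with one noise sample."""
    return -(np.log(path_score(v_x, matrices, u_pos))
             + np.log(1.0 - path_score(v_x, noise_matrices, u_noise)))

# Toy single-edge example for "kids play": the path kid -> play realizes
# pi_SUBJ^{-1}(pi_ARG(kid)), i.e. v_kid M_ARG M_SUBJ^{-1} . u_play.
rng = np.random.default_rng(1)
d = 8
v_kid, u_play, u_noise = (rng.normal(size=d) for _ in range(3))
# Matrices scaled by 1/sqrt(d) only to keep the toy scores numerically tame.
M_ARG, M_SUBJ_inv, M_rand = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
print(nce_loss(v_kid, [M_ARG, M_SUBJ_inv], u_play, [M_ARG, M_rand], u_noise))
```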
Initialization Matrices are initialized as 1 2(I + G), where I is the identity matrix; and G and all 1280 GloVe no matrix vecDCS vecUD books essay/N novel/N essay/N author novel/N essay/N novel/N published memoir/N anthology/N article/N novel books/N publication/N anthology/N memoir autobiography/N memoir/N poem/N wrote non-fiction/J poem/N autobiography/N biography reprint/V autobiography/N publication/N autobiography publish/V story/N journal/N essay republish/V pamphlet/N memoir/N illustrated chapbook/N tale/N pamphlet/N Table 1: Top 10 similar words to “book/N” vectors are initialized with i.i.d. Gaussians of variance 1/d, where d is the vector dimension. We find that the diagonal component I is necessary to bring information from vx to uy, whereas the randomness of G makes convergence faster. M−1 N is initialized as the transpose of MN. Learning Rate We find that the initial learning rate for vectors can be set to 0.1. But for matrices, it should be less than 0.0005 otherwise the model diverges. For stable training, we rescale gradients when their norms exceed a threshold. Regularizer During training, MN and M−1 N are treated as independent matrices. However, we use the regularizer γ∥M−1 N MN−1 d tr(M−1 N MN)I∥2 to drive M−1 N close to the inverse of MN.1 We also use κ∥M⊥ N MN −1 d tr(M⊥ N MN)I∥2 to prevent MN from having too different scales at different directions (i.e., to drive MN close to orthogonal). We set γ = 0.001 and κ = 0.0001. Despite the rather weak regularizer, we find that M−1 N can be learned to be exactly the inverse of MN, and MN can actually be an orthogonal matrix, showing some semantic regularity (Section 5.1). 5 Experiments For training vector-based DCS, we use Wikipedia Extractor2 to extract texts from the 2015-12-01 dump of English Wikipedia3. Then, we use Stanford Parser4 (Klein and Manning, 2003) to parse all sentences and convert the UD trees into DCS trees by handwritten rules. We assign a weight to each path of the DCS trees as follows. 1Problem with the naive regularizer ∥M −1M −I∥2 is that, when the scale of M goes larger, it will drive M −1 smaller, which may lead to degeneration. So we scale I according to the trace of M −1M. 2http://medialab.di.unipi.it/wiki/ Wikipedia_Extractor 3https://dumps.wikimedia.org/enwiki/ 4http://nlp.stanford.edu/software/ lex-parser.shtml π−1 SUBJ(πARG(house)) π−1 COMP(πARG(house)) π−1 ARG(πin(house)) victorian/J build/V sit/V stand/V rent/V house/N vacant/J leave/V stand/V 18th-century/J burn down/V live/V historic/J remodel/V hang/V old/J demolish/V seat/N georgian/J restore/V stay/V local/J renovate/V serve/V 19th-century/J rebuild/V reside/V tenement/J construct/V hold/V π−1 ARG(πSUBJ(learn)) π−1 ARG(πCOMP(learn)) π−1 about(πARG(learn)) teacher/N skill/N otherness/N skill/N lesson/N intimacy/N he/P technique/N femininity/N she/P experience/N self-awareness/N therapist/N ability/N life/N student/N something/N self-expression/N they/P knowledge/N sadomasochism/N mother/N language/N emptiness/N lesson/N opportunity/N criminality/N father/N instruction/N masculinity/N Table 2: Top 10 answers of high dot products For any path P passing through k intermediate nodes of degrees n1, . . . , nk, respectively, we set Weight(P) := k Y i=1 1 ni −1. (5) Note that ni ≥2 because there is a path P passing through the node; and Weight(P) = 1 if P consists of a single edge. The equation (5) is intended to degrade long paths which pass through several high-valency nodes. 
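A small worked example of equation (5): the weight of a path is the product of 1/(ni − 1) over the intermediate nodes it passes through, so long paths through high-valency nodes are down-weighted. The degree sequences below are made up for illustration.

```python
def path_weight(intermediate_degrees):
    """Weight(P) = prod_i 1 / (n_i - 1) over intermediate node degrees (equation 5)."""
    w = 1.0
    for n in intermediate_degrees:
        assert n >= 2, "a path can only pass through a node of degree >= 2"
        w *= 1.0 / (n - 1)
    return w

print(path_weight([]))        # a single-edge path has weight 1
print(path_weight([2]))       # through one degree-2 node: still 1
print(path_weight([4, 3]))    # through a degree-4 and a degree-3 node: 1/6
```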
We use a random walk algorithm to sample paths such that the expected times a path is sampled equals its weight. As a result, the sampled path lengths range from 1 to 19, average 2.1, with an exponential tail. We convert all words which are sampled less than 1000 times to *UNKNOWN*/POS, and all prepositions occurring less than 10000 times to an *UNKNOWN* field. As a result, we obtain a vocabulary of 109k words and 211 field names. Using the sampled paths, vectors and matrices are trained as in Section 4 (vecDCS). The vector dimension is set to d = 250. We compare with three baselines: (i) all matrices are fixed to identity (“no matrix”), in order to investigate the effects of meaning changes caused by syntactic-semantic roles and prepositions; (ii) the regularizer enforcing M−1 N to be actually the inverse matrix of MN is set to γ = 0 (“no inverse”), in order to investigate the effects of a semantically motivated constraint; and (iii) applying the same training scheme to UD trees directly, by modeling UD relations as matrices (“vecUD”). In this case, one edge is assigned one UD relation rel, so we implement the transfor1281 AN NN VO SVO GS11 GS12 vecDCS 0.51 0.49 0.41 0.62 0.29 0.33 -no matrix 0.52 0.46 0.42 0.62 0.29 0.33 -no inverse 0.47 0.43 0.38 0.58 0.28 0.33 vecUD 0.44 0.46 0.41 0.58 0.25 0.25 GloVe 0.41 0.47 0.41 0.60 0.23 0.17 Grefenstette and Sadrzadeh (2011) 0.21 Blacoe and Lapata (2012):RAE 0.31 0.30 0.28 Grefenstette (2013a) 0.27 Paperno et al. (2014) 0.36 Hashimoto et al. (2014):Waddnl 0.48 0.40 0.39 0.34 Kartsaklis and Sadrzadeh (2014) 0.43 0.41 Table 3: Spearman’s ρ on phrase similarity mation from child to parent by Mrel, and from parent to child by M−1 rel . The same hyper-parameters are used to train vecUD. By comparing vecDCS with vecUD we investigate if applying the semantics framework of DCS makes any difference. Additionally, we compare with the GloVe (6B, 300d) vector5 (Pennington et al., 2014). Norms of all word vectors are normalized to 1 and Frobenius norms of all matrices are normalized to √ d. 5.1 Qualitative Analysis We observe several special properties of the vectors and matrices trained by our model. Words are clustered by POS In terms of cosine similarity, word vectors trained by vecDCS and vecUD are clustered by POS tags, probably due to their interactions with matrices during training. This is in contrast to the vectors trained by GloVe or “no matrix” (Table 1). Matrices show semantic regularity Matrices learned for ARG, SUBJ and COMP are exactly orthogonal, and some most frequent prepositions6 are remarkably close. For these matrices, the corresponding M−1 also exactly converge to their inverse. It suggests regularities in the semantic space, especially because orthogonal matrices preserve cosine similarity – if MN is orthogonal, two words x, y and their projections πN(x), πN(y) will have the same similarity measure, which is semantically reasonable. In contrast, matrices trained by vecUD are only orthogonal for three UD relations, namely conj, dep and appos. Words transformed by matrices To illustrate the matrices trained by vecDCS, we start from the query vectors of two words, house and learn, 5http://nlp.stanford.edu/projects/ glove/ 6of, in, to, for, with, on, as, at, from applying different matrices to them, and show the 10 answer vectors of the highest dot products (Tabel 2). 
These are the lists of likely words which: take house as a subject, take house as a complement, fills into “ in house”, serve as a subject of learn, serve as a complement of learn, and fills into “learn about ”, respectively. As the table shows, matrices in vecDCS are appropriately learned to map word vectors to their syntacticsemantic roles. 5.2 Phrase Similarity To test if vecDCS has the composition ability to calculate similar things as similar vectors, we conduct evaluation on a wide range of phrase similarity tasks. In these tasks, a system calculates similarity scores for pairs of phrases, and the performance is evaluated as its correlation with human annotators, measured by Spearman’s ρ. Datasets Mitchell and Lapata (2010) create datasets7 for pairs of three types of two-word phrases: adjective-nouns (AN) (e.g. “black hair” and “dark eye”), compound nouns (NN) (e.g. “tax charge” and “interest rate”) and verb-objects (VO) (e.g. “fight war” and “win battle”). Each dataset consists of 108 pairs and each pair is annotated by 18 humans (i.e., 1,944 scores in total). Similarity scores are integers ranging from 1 to 7. Another dataset8 is created by extending VO to SubjectVerb-Object (SVO), and then assessing similarities by crowd sourcing (Kartsaklis and Sadrzadeh, 2014). The dataset GS11 created by Grefenstette and Sadrzadeh (2011) (100 pairs, 25 annotators) is also of the form SVO, but in each pair only the verbs are different (e.g. “man pro7http://homepages.inf.ed.ac.uk/ s0453356/ 8http://www.cs.ox.ac.uk/activities/ compdistmeaning/ 1282 Message-Topic(e1, e2) It is a monthly [report]1 providing [opinion]2 and advice on current United States government contract issues. Message-Topic(e1, e2) The [report]1 gives an account of the silvicultural [work]2 done in Africa, Asia, Australia, South American and the Caribbean. Message-Topic(e1, e2) NUS today responded to the Government’s [announcement]1 of the long-awaited [review]2 of university funding. Component-Whole(e2, e1) The [review]1 published political [commentary]2 and opinion, but even more than that. Message-Topic(e1, e2) It is a 2004 [book]1 criticizing the political and linguistic [writings]2 of Noam Chomsky. Table 4: Similar training instances clustered by cosine similarities between features flight ARG delay ARG smoke SUBJ cause ARG ARG COMP smoke flight ARG delay ARG smoke SUBJ cause ARG ARG COMP ? COMP cause ARG ARG SUBJ flight ARG delay ARG (a) (b) (c) (d) ? Figure 4: For “[smoke]1 cause flight [delay]2”, we construct (a)(b) from subtrees, and (c)(d) from rerooted trees, to form 4 query vectors as feature. vide/supply money”). The dataset GS12 described in Grefenstette (2013a) (194 pairs, 50 annotators) is of the form Adjective-Noun-Verb-AdjectiveNoun (e.g. “local family run/move small hotel”), where only verbs are different in each pair. Our method We calculate the cosine similarity of query vectors corresponding to phrases. For example, the query vector for “fight war” is calculated as vwarMARGM−1 COMP + vfight. For vecUD we use Mnsubj and Mdobj instead of MSUBJ and MCOMP, respectively. For GloVe we use additive compositions. Results As shown in Table 3, vecDCS is competitive on AN, NN, VO, SVO and GS12, consistently outperforming “no inverse”, vecUD and GloVe, showing strong compositionality. The weakness of “no inverse” suggests that relaxing the constraint of inverse matrices may hurt compositionaly, though our preliminary examination on word similarities did not find any difference. 
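The protocol behind these numbers can be summarized by the sketch below: compose a query vector for each phrase, score a pair by cosine similarity, and report Spearman's ρ against the human ratings (using scipy for the rank correlation). The composition follows the verb-object example above (vobj MARG M−1COMP + vverb); the vectors, matrices and ratings are random placeholders, so the printed correlation is meaningless.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compose_vo(v_verb, v_obj, M_ARG, M_COMP_inv):
    """Verb-object phrase, e.g. 'fight war': v_war M_ARG M_COMP^{-1} + v_fight."""
    return v_obj @ M_ARG @ M_COMP_inv + v_verb

# Random placeholders standing in for trained parameters and human judgments.
rng = np.random.default_rng(2)
d = 8
M_ARG, M_COMP_inv = (rng.normal(size=(d, d)) for _ in range(2))
pairs = [((rng.normal(size=d), rng.normal(size=d)),
          (rng.normal(size=d), rng.normal(size=d))) for _ in range(20)]
human_scores = rng.integers(1, 8, size=20)            # ratings on the 1-7 scale

model_scores = [cosine(compose_vo(*a, M_ARG, M_COMP_inv),
                       compose_vo(*b, M_ARG, M_COMP_inv)) for a, b in pairs]
print(spearmanr(model_scores, human_scores).correlation)
```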
The GS11 dataset appears to favor models that can learn from interactions between the subject and object arguments, such as the non-linear model Waddnl in Hashimoto et al. (2014) and the entanglement model in Kartsaklis and Sadrzadeh (2014). However, these models do not show particular advantages on other datasets. The recursive autoencoder (RAE) proposed in Socher et al. (2011) shares an aspect with vecDCS as to construct meanings from parse trees. It is tested by Blacoe and Lapata (2012) for compositionality, where vecDCS appears to be better. NeverthevecDCS 81.2 -no matrix 69.2 -no inverse 79.7 vecUD 69.2 GloVe 74.1 Socher et al. (2012) 79.1 +3 features 82.4 dos Santos et al. (2015) 84.1 Xu et al. (2015) 85.6 Table 5: F1 on relation classification less, we note that “no matrix” performs as good as vecDCS, suggesting that meaning changes caused by syntactic-semantic roles might not be major factors in these datasets, because the syntacticsemantic relations are all fixed in each dataset. 5.3 Relation Classification In a relation classification task, the relation between two words in a sentence needs to be classified; we expect vecDCS to perform better than “no matrix” on this task because vecDCS can distinguish the different syntactic-semantic roles of the two slots the two words fit in. We confirm this conjecture in this section. Dataset We use the dataset of SemEval-2010 Task 8 (Hendrickx et al., 2009), in which 9 directed relations (e.g. Cause-Effect) and 1 undirected relation Other are annotated, 8,000 instances for training and 2,717 for test. Performance is measured by the 9-class direction-aware Macro-F1 score excluding Other class. Our method For any sentence with two words marked as e1 and e2, we construct the DCS tree of the sentence, and take the subtree T rooted at the common ancestor of e1 and e2. We construct four vectors from T, namely: the query vector for the subtree rooted at e1 (resp. e2), and the query vector of the DCS tree obtained from T by rerooting it at e1 (resp. e2) (Figure 4). The four vectors are normalized and concatenated to form the only feature used to train a classifier. For vecUD, we use the corresponding vectors calculated from UD trees. For GloVe, we use the word vector of e1 (resp. e2), and the sum of vectors of all words within the span [e1, e2) (resp. (e1, e2]) as 1283 “banned drugs” “banned movies” “banned books” drug/N bratz/N publish/N marijuana/N porn/N unfair/N cannabis/N indecent/N obscene/N trafficking/N blockbuster/N samizdat/N thalidomide/N movie/N book/N smoking/N idiots/N responsum/N narcotic/N blacklist/N illegal/N botox/N grindhouse/N reclaiming/N doping/N doraemon/N redbook/N Table 6: Answers for composed query vectors the four vectors. Classifier is SVM9 with RBF kernel, C = 2 and Γ = 0.25. The hyper-parameters are selected by 5-fold cross validation. Results VecDCS outperforms baselines on relation classification (Table 5). It makes 16 errors in misclassifying the direction of a relation, as compared to 144 such errors made by “no matrix”, 23 by “no inverse”, 30 by vecUD, and 161 by GloVe. This suggests that models with syntactic-semantic transformations (i.e. vecDCS, “no inverse”, and vecUD) are indeed good at distinguishing the different roles played by e1 and e2. VecDCS scores moderately lower than the state-of-the-art (Xu et al., 2015), however we note that these results are achieved by adding additional features and training task-specific neural networks (dos Santos et al., 2015; Xu et al., 2015). 
Our method only uses features constructed from unlabeled corpora. From this point of view, it is comparable to the MV-RNN model (without features) in Socher et al. (2012), and vecDCS actually does better. Table 4 shows an example of clustered training instances as assessed by cosine similarities between their features. It suggests that the features used in our method can actually cluster similar relations. 5.4 Sentence Completion If vecDCS can compose query vectors of DCS trees, one should be able to “execute” the vectors to get a set of answers, as the original DCS trees can do. This is done by taking dot products with answer vectors and then ranking the answers. Examples are shown in Table 6. Since query vectors and answer vectors are trained from unlabeled corpora, we can only obtain a coarsegrained candidate list. However, it is noteworthy that despite a common word “banned” shared by the phrases, their answer lists are largely different, suggesting that composition actually can be done. Moreover, some words indeed answer the queries 9https://www.csie.ntu.edu.tw/˜cjlin/ libsvm/ vecDCS 50 -no matrix 60 -no inverse 46 vecUD 31 N-gram (Various) 39-41 Zweig et al. (2012) 52 Mnih and Teh (2012) 55 Gubbins and Vlachos (2013) 50 Mikolov et al. (2013a) 55 Table 7: Accuracy (%) on sentence completion (e.g. Thalidomide for “banned drugs” and Samizdat for “banned books”). Quantitatively, we evaluate this utility of executing queries on the sentence completion task. In this task, a sentence is presented with a blank that need to be filled in. Five possible words are given as options for each blank, and a system needs to choose the correct one. The task can be viewed as a coarse-grained question answering or an evaluation for language models (Zweig et al., 2012). We use the MSR sentence completion dataset10 which consists of 1,040 test questions and a corpus for training language models. We train vecDCS on this corpus and use it for evaluation. Results As shown in Table 7, vecDCS scores better than the N-gram model and demonstrates promising performance. However, to our surprise, “no matrix” shows an even better result which is the new state-of-the-art. Here we might be facing the same problem as in the phrase similarity task (Section 5.2); namely, all choices in a question fill into the same blank and the same syntactic-semantic role, so the transforming matrices in vecDCS might not be able to distinguish different choices; on the other hand, vecDCS would suffer more from parsing and POS-tagging errors. Nonetheless, we believe the result by “no matrix” reveals a new horizon of sentence completion, and suggests that composing semantic vectors according to DCS trees could be a promising direction. 6 Discussion We have demonstrated a way to link a vector composition model to a formal semantics, combining the strength of vector representations to calculate phrase similarities, and the strength of formal semantics to build up structured queries. In this section, we discuss several lines of previous research related to this work. 10http://research.microsoft.com/en-us/ projects/scc/ 1284 Logic and Distributional Semantics Logic is necessary for implementing the functional aspects of meaning and organizing knowledge in a structured and unambiguous way. In contrast, distributional semantics provides an elegant methodology for assessing semantic similarity and is well suited for learning from data. 
There have been repeated calls for combining the strength of these two approaches (Coecke et al., 2010; Baroni et al., 2014; Liang and Potts, 2015), and several systems (Lewis and Steedman, 2013; Beltagy et al., 2014; Tian et al., 2014) have contributed to this direction. In the remarkable work by Beltagy et al. (to appear), word and phrase similarities are explicitly transformed to weighted logical rules that are used in a probabilistic inference framework. However, this approach requires considerable amount of engineering, including the generation of rule candidates (e.g. by aligning sentence fragments), converting distributional similarities to weights, and efficiently handling the rules and inference. What if the distributional representations are equipped with a logical interface, such that the inference can be realized by simple vector calculations? We have shown it possible to realize semantic composition; we believe this may lead to significant simplification of the system design for combining logic and distributional semantics. Compositional Distributional Models There has been active exploration on how to combine word vectors such that adequate phrase/sentence similarities can be assessed (Mitchell and Lapata, 2010, inter alia), and there is nothing new in using matrices to model changes of meanings. However, previous model designs mostly rely on linguistic intuitions (Paperno et al., 2014, inter alia), whereas our model has an exact logic interpretation. Furthermore, by using additive composition we enjoy a learning guarantee (Tian et al., 2015). Vector-based Logic Models This work also shares the spirit with Grefenstette (2013b) and Rocktaeschel et al. (2014), in exploring vector calculations that realize logic operations. However, the previous works did not specify how to integrate contextual distributional information, which is necessary for calculating semantic similarity. Formal Semantics Our model implements a fragment of logic capable of semantic composition, largely due to the simple framework of Dependency-based Compositional Semantics (Liang et al., 2013). It fits in a long tradition of logic-based semantics (Montague, 1970; Dowty et al., 1981; Kamp and Reyle, 1993), with extensive studies on extracting semantics from syntactic representations such as HPSG (Copestake et al., 2001; Copestake et al., 2005) and CCG (Baldridge and Kruijff, 2002; Bos et al., 2004; Steedman, 2012; Artzi et al., 2015; Mineshima et al., 2015). Logic for Natural Language Inference The pursue of a logic more suitable for natural language inference is also not new. For example, MacCartney and Manning (2008) has implemented a model of natural logic (Lakoff, 1970). We would not reach the current formalization of logic of DCS without reading the work by Calvanese et al. (1998), which is an elegant formalization of database semantics in description logic. Semantic Parsing DCS-related representations have been actively used in semantic parsing and we see potential in applying our model. For example, Berant and Liang (2014) convert λ-DCS queries to canonical utterances and assess paraphrases at the surface level; an alternative could be using vector-based DCS to bring distributional similarity directly into calculation of denotations. We also borrow ideas from previous work, for example our training scheme is similar to Guu et al. 
(2015) in using paths and composition of matrices, and our method is similar to Poon and Domingos (2009) in building structured knowledge from clustering syntactic parse of unlabeled data. Further Applications Regarding the usability of distributional representations learned by our model, a strong point is that the representation takes into account syntactic/structural information of context. Unlike several previous models (Pad´o and Lapata, 2007; Levy and Goldberg, 2014; Pham et al., 2015), our approach learns matrices at the same time that can extract the information according to different syntactic-semantic roles. A related application is selectional preference (Baroni and Lenci, 2010; Lenci, 2011; Van de Cruys, 2014), wherein our model might has potential for smoothly handling composition. Reproducibility Find our code at https:// github.com/tianran/vecdcs Acknowledgments This work was supported by CREST, JST. We thank the anonymous reviewers for their valuable comments. 1285 References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of EMNLP. Jason Baldridge and Geert-Jan Kruijff. 2002. Coupling ccg and hybrid logic dependency semantics. In Proceedings of ACL. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpusbased semantics. Computational Linguistics, 36(4). Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP. Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2014. Frege in space: A program for compositional distributional semantics. Linguistic Issues in Language Technology, 9(6). Islam Beltagy, Katrin Erk, and Raymond Mooney. 2014. Probabilistic soft logic for semantic textual similarity. In Proceedings of ACL. Islam Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. to appear. Representing meaning with a combination of logical form and vectors. Computational Linguistics, special issue on formal distributional semantics. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of ACL. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of EMNLP-CoNLL. Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Widecoverage semantic representations from a ccg parser. In Proceedings of ICCL. L´eon Bottou. 2012. Stochastic gradient descent tricks. In Gr´egoire Montavon, Genevi`eve B. Orr, and Klaus-Robert M¨uller, editors, Neural Networks: Tricks of the Trade. Springer, Berlin. Diego Calvanese, Giuseppe De Giacomo, and Maurizio Lenzerini. 1998. On the decidability of query containment under constraints. In Proceedings of the 17th ACM SIGACT SIGMOD SIGART Symposium on Principles of Database Systems (PODS98). Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of ACL. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3(2-3). Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. 
In Proceedings of ACLIJCNLP. David R. Dowty, Robert E. Wall, and Stanley Peters. 1981. Introduction to Montague Semantics. Springer Netherlands. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP. Edward Grefenstette. 2013a. Category-Theoretic Quantitative Compositional Distributional Models of Natural Language Semantics. PhD thesis. Edward Grefenstette. 2013b. Towards a formal distributional semantics: Simulating logical calculi with tensors. In Proceedings of *SEM. Joseph Gubbins and Andreas Vlachos. 2013. Dependency language models for sentence completion. In Proceedings of EMNLP. Michael U. Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. Mach. Learn. Res., 13(1). Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of EMNLP. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2014. Jointly learning word representations and composition functions using predicate-argument structures. In Proceedings of EMNLP. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009). Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Springer Netherlands. Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. In Proceedings of the 11th Workshop on Quantum Physics and Logic (QPL). Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in NIPS. 1286 George Lakoff. 1970. Linguistics and natural logic. Synthese, 22(1-2). Alessandro Lenci. 2011. Composing and updating verb argument expectations: A distributional semantic model. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of ACL. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of ACL, 3. Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. Transactions of ACL, 1. Percy Liang and Christopher Potts. 2015. Bringing machine learning and compositional semantics together. Annual Review of Linguistics, 1. Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2). Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of Coling. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings ACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 
2013b. Distributed representations of words and phrases and their compositionality. In Advances in NIPS. Koji Mineshima, Pascual Mart´ınez-G´omez, Yusuke Miyao, and Daisuke Bekki. 2015. Higher-order logical inference with compositional semantics. In Proceedings of EMNLP. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8). Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In In Proceedings of ICML. Richard Montague. 1970. Universal grammar. Theoria, 36. Sebastian Pad´o and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2). Denis Paperno, Nghia The Pham, and Marco Baroni. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In Proceedings of ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Nghia The Pham, Germ´an Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model. In Proceedings of ACL. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of EMNLP. Tim Rocktaeschel, Matko Bosnjak, Sameer Singh, and Sebastian Riedel. 2014. Low-dimensional embeddings of logic. In ACL Workshop on Semantic Parsing (SP’14). Richard Socher, Eric H. Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y. Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in NIPS. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP. Mark Steedman. 2012. Taking Scope - The Natural Semantics of Quantifiers. MIT Press. Ran Tian, Yusuke Miyao, and Takuya Matsuzaki. 2014. Logical inference on dependency-based compositional semantics. In Proceedings of ACL. Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2015. The mechanism of additive composition. arXiv:1511.08407. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1). Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of EMNLP. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of EMNLP. Geoffrey Zweig, John C. Platt, Christopher Meek, Christopher J.C. Burges, Ainur Yessenalina, and Qiang Liu. 2012. Computational approaches to sentence completion. In Proceedings of ACL. 1287
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1288–1297, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Inner Attention based Recurrent Neural Networks for Answer Selection Bingning Wang, Kang Liu, Jun Zhao National Laboratory of Pattern Recognition, Institute of Automation Chinese Academy of Sciences, Beijing, China {bingning.wang, kliu, jzhao}@nlpr.ia.ac.cn Abstract Attention based recurrent neural networks have shown advantages in representing natural language sentences (Hermann et al., 2015; Rockt¨aschel et al., 2015; Tan et al., 2015). Based on recurrent neural networks (RNN), external attention information was added to hidden representations to get an attentive sentence representation. Despite the improvement over nonattentive models, the attention mechanism under RNN is not well studied. In this work, we analyze the deficiency of traditional attention based RNN models quantitatively and qualitatively. Then we present three new RNN models that add attention information before RNN hidden representation, which shows advantage in representing sentence and achieves new stateof-art results in answer selection task. 1 Introduction Answer selection (AS) is a crucial subtask of the open domain question answering (QA) problem. Given a question, the goal is to choose the answer from a set of pre-selected sentences (Heilman and Smith, 2010; Yao et al., 2013). Traditional AS models are based on lexical features such as parsing tree edit distance. Neural networks based models are proposed to represent the meaning of a sentence in a vector space and then compare the question and answer candidates in this hidden space (Wang and Nyberg, 2015; Feng et al., 2015), which have shown great success in AS. However, these models represent the question and sentence separately, which may ignore the information subject to the question when representing the answer. For example, given a candidate answer: Michael Jordan abruptly retired from Chicago Bulls before the beginning of the 1993-94 NBA season to pursue a career in baseball. For a question: When did Michael Jordan retired from NBA? we should focus on the beginning of the 1993-94 in the sentence; however, when we were asked: Which sports does Michael Jordan participates after his retirement from NBA? we should pay more attention to pursue a career in baseball. Recent years, attention based models are proposed in light of this purpose and have shown great success in many NLP tasks such as machine translation (Bahdanau et al., 2014; Sutskever et al., 2014), question answering (Sukhbaatar et al., 2015) and recognizing textual entailments (Rockt¨aschel et al., 2015). When building the representation of a sentence, some attention information is added to the hidden state. For example, in attention based recurrent neural networks models (Bahdanau et al., 2014) each time-step hidden representation is weighted by attention. Inspired by the attention mechanism, some attention-based RNN answer selection models have been proposed (Tan et al., 2015) in which the attention when computing answer representation is from question representation. However, in the RNN architecture, at each time step a word is added and the hidden state is updated recurrently, so those hidden states near the end of the sentence are expected to capture more information1. 
Consequently, after adding the attention information to the time sequence hidden representations, the near-the-end hidden variables will be more attended due to their comparatively abundant semantic accumulation, which may result in a biased attentive weight towards the later coming words in RNN. In this work, we analyze this attention bias 1so in many previous RNN-based model use the last hidden variable as the whole sentence representation 1288 problem qualitatively and quantitatively, and then propose three new models to solve this problem. Different from previous attention based RNN models in which attention information is added after RNN computation, we add the attention before computing the sentence representation. Concretely, the first one uses the question attention to adjust word representation (i.e. word embedding) in the answer directly, and then we use RNN to model the attentive word sequence. However, this model attends a sentence word by word which may ignore the relation between words. For example, if we were asked: what is his favorite food? one answer candidate is: He likes hot dog best. hot or dog may be not relate to the question by itself, but they are informative as a whole in the context. So we propose the second model in which every word representation in answer is impacted by not only question attention but also the context representation of the word (i.e. the last hidden state). In our last model, inspired by previous work on adding gate into inner activation of RNN to control the long and short term information flow, we embed the attention to the inner activation gate of RNN to influence the computation of RNN hidden representation. In addition, inspired by recent work called Occam’s Gate in which the activation of input units are penalized to be as less as possible, we add regulation to the summation of the attention weights to impose sparsity. Overall, in this work we make three contributions: (1) We analyze the attention bias problem in traditional attention based RNN models. (2) We propose three inner attention based RNN models and achieve new state-of-the-art results in answer selection. (3) We use Occam’s Razor to regulate the attention weights which shows advantage in long sentence representation. 2 Related Work Recent years, many deep learning framework has been developed to model the text in a vector space, and then use the embedded representations in this space for machine learning tasks. There are many neural networks architectures for this representation such as convolutional neural networks(Yin et al., 2015), recursive neural networks(Socher et al., 2013) and recurrent neural networks(Mikolov et al., 2011). In this work we propose Inner Attention based RNN (IARNN) for answer selection, and there are two main works which we are related to. 2.1 Attention based Models Many recent works show that attention techniques can improve the performance of machine learning models (Mnih et al., 2014; Zheng et al., 2015). In attention based models, one representation is built with attention (or supervision) from other representation. Weston et al (2014) propose a neural networks based model called Memory Networks which uses an external memory to store the knowledge and the memory are read and written on the fly with respect to the attention, and these attentive memory are combined for inference. Since then, many variants have been proposed to solve question answering problems (Sukhbaatar et al., 2015; Kumar et al., 2015). 
Hermann et al. (2015) and many other researchers (Tan et al., 2015; Rocktäschel et al., 2015) introduce the attention mechanism into the LSTM-RNN architecture. An RNN models the input sequence word by word and updates its hidden variable recurrently; compared with CNNs, RNNs are more capable of exploiting long-distance sequential information. In attention based RNN models, after each time-step hidden representation is computed, attention information is used to weight each hidden representation, and the hidden states are then combined with respect to those weights to obtain the sentence (or document) representation. Commonly there are two ways to obtain attention from the source sentence: from the whole sentence representation (which they call attentive) or word by word (called impatient). 2.2 Answer Selection Answer selection is a sub-task of QA and of many other tasks such as machine comprehension. Given a question and a set of candidate sentences, one should choose the best sentence from the candidate set that can answer the question. Previous work typically relied on feature engineering, linguistic tools, or external resources. For example, Yih et al. (2013) use semantic features from WordNet to enhance lexical features. Wang et al. (2007) compare the question and answer sentence through their syntactic matching in parse trees. Heilman and Smith (2010) perform the matching using minimal edit sequences between dependency parse trees. Severyn and Moschitti (2013) automate the extraction of discriminative tree-edit features over parse trees. While these methods are effective, they may suffer from the limited availability of additional resources and from the errors of NLP tools such as dependency parsers. Recently, many works use deep learning architectures to represent the question and the answer in the same hidden space, so that the task can be converted into a classification or learning-to-rank problem (Feng et al., 2015; Wang and Nyberg, 2015). With the development of attention mechanisms, Tan et al. (2015) propose an attention-based RNN model which introduces question attention into the answer representation. 3 Traditional Attention based RNN Models and Their Deficiency Attention-based models introduce attention information into the representation process. In answer selection, given a question Q = {q_1, q_2, q_3, ..., q_n}, where q_i is the i-th word and n is the question length, we can compute its representation in the RNN architecture as follows:

X = D[q_1, q_2, ..., q_n]
h_t = σ(W_ih x_t + W_hh h_{t-1} + b_h)
y_t = σ(W_ho h_t + b_o)    (1)

where D is an embedding matrix that projects each word to its embedding space R^d; W_ih, W_hh, W_ho are weight matrices and b_h, b_o are bias vectors; σ is an activation function such as tanh. Usually we can ignore the output variables and use only the hidden variables. After the recurrent process, the last hidden variable h_n or the average of all hidden states, (1/n) Σ_{t=1}^{n} h_t, is adopted as the question representation r_q. When modeling a candidate answer sentence of length m, S = {s_1, s_2, s_3, ..., s_m}, in an attention based RNN model, instead of using the last hidden state or the average of the hidden states, we use attentive hidden states that are weighted by r_q:

H_a = [h_a(1), h_a(2), ..., h_a(m)]
s_t ∝ f_attention(r_q, h_a(t))
h̃_a(t) = s_t h_a(t)
r_a = Σ_{t=1}^{m} h̃_a(t)    (2)

where h_a(t) is the hidden state of the answer at time t.
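To make Eqs. (1) and (2) concrete, here is a minimal numpy sketch (not the authors' code) of the plain RNN question encoder and the outer attentive combination of answer hidden states. The concrete scoring function of Eq. (3), given next, is left as a pluggable callable; all dimensions, the random initialization, and the exponential placeholder scorer are illustrative assumptions.

```python
import numpy as np

def rnn_encode(X, W_ih, W_hh, b_h):
    """Eq. (1): run a simple tanh RNN over column-wise word embeddings X (d x n)
    and return all hidden states H (k x n). The output layer y_t is omitted,
    since only the hidden states are used afterwards."""
    d, n = X.shape
    k = W_hh.shape[0]
    h = np.zeros(k)
    H = np.zeros((k, n))
    for t in range(n):
        h = np.tanh(W_ih @ X[:, t] + W_hh @ h + b_h)
        H[:, t] = h
    return H

def outer_attention(H_a, r_q, f_attention):
    """Eq. (2): weight each answer hidden state by a normalized score
    s_t proportional to f_attention(r_q, h_a(t)), then sum them into r_a."""
    scores = np.array([f_attention(r_q, H_a[:, t]) for t in range(H_a.shape[1])])
    s = scores / scores.sum()          # normalize so the weights sum to 1
    return (H_a * s).sum(axis=1)       # r_a = sum_t s_t * h_a(t)

# Illustrative usage with toy sizes (d=50 embedding, k=40 hidden units).
rng = np.random.default_rng(0)
d, k, n_q, n_a = 50, 40, 7, 12
W_ih = rng.normal(0, 0.1, (k, d))
W_hh = rng.normal(0, 0.1, (k, k))
b_h = np.zeros(k)
H_q = rnn_encode(rng.normal(size=(d, n_q)), W_ih, W_hh, b_h)
r_q = H_q.mean(axis=1)                 # average hidden states as question repr.
H_a = rnn_encode(rng.normal(size=(d, n_a)), W_ih, W_hh, b_h)
r_a = outer_attention(H_a, r_q, lambda q, h: np.exp(q @ h))  # placeholder scorer
```

The same recurrent weights are reused for question and answer here, which matches the parameter sharing between question and answer reported in the experimental setup below.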
In many previous work (Hermann et al., 2015; Rockt¨aschel et al., 2015; Tan et al., 2015), the atQuestion RNN Answer ave|max rq Attention SUM ra cosine Figure 1: Traditional attention based RNN answer selection model. Dark blue rectangles represent hidden virable, ⊗means gate opperation. tention function fattention was computed as: m(t) = tanh(Whmha(t) + Wqmrq) fattention(rq, ha(t)) = exp(wT msm(t)) (3) Whm and Wqm are attentive weight matrices and wms is attentive weight vector. So we can expect that the candidate answer sentence representation ra may be represented in a question-guided way: when its hidden state ha(t) is irrelevant to the question (determined by attention weight st), it will take less part in the final representation; but when this hidden state is relavent to the question, it will contribute more in representing ra. We call this type of attention based RNN model OARNN which stands for Outer Attention based RNN models because this kind of model adds attention information outside the RNN hidden representation computing process. An illustration of traditional attention-based RNN model is in Figure 1. However, we know in the RNN architecture, the input words are processed in time sequence and the hidden states are updated recurrently, so the current hidden state ht is supposed to contain all the information up to time t, when we add question attention information, aiming at finding the useful part of the sentence, these near-the-end hidden states are prone to be selected because they contains much more information about the whole sentence. In other word, if the question pays attention to the hidden states at time t , then it should also pay attention to those hidden states after t (i.e {ht′|t ′ > t}) as they contain the information at least as much as ht, but in answer selection for a specific candidate answer, the useful parts to answer the question may be located anywhere in a sentence, so the attention should also distribute uniformly around the sentence. Traditional attention-based RNN models under attention after representation mechanism may cause the attention to bias towards the later coming hidden states. We will analyze this attention bias problem quantita1290 tively in the experiments. 4 Inner Attention based Recurrent Neural Networks In order to solve the attention bias problem, we propose an intuition: Attention before representation Instead of adding attention information after encoding the answer by RNN, we add attention before computing the RNN hidden representations. Based on this intuition, we propose three inner attention based RNN models detailed below. 4.1 IARNN-WORD As attention mechanism aims at finding useful part of a sentence, the first model applies the above intuition directly. Instead of using the original answer words to the RNN model, we weight the words representation according to question attention as follows: αt = σ(rT q Mqixt) ˜xt = αt ∗xt (4) where Mqi is an attention matrix to transform a question representaion into the word embedding space. Then we use the dot value to determine the question attention strength, σ is sigmoid function to normalize the weight αt between 0 and 1. The above attention process can be understood as sentence distillation where the input words are distilled (or filtered) by question attention. Then, we can represent the whole sentence based on this distilled input using traditional RNN model. 
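As a minimal illustration of the IARNN-WORD input attention of Eq. (4) (a sketch with assumed toy dimensions, not the reference implementation), the question vector is projected by the attention matrix Mqi, scored against every answer word embedding through a sigmoid, and the resulting scalar gate rescales that word's embedding before it reaches the RNN:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def iarnn_word_attention(X_a, r_q, M_qi):
    """Eq. (4): alpha_t = sigmoid(r_q^T M_qi x_t); x_tilde_t = alpha_t * x_t.
    X_a holds the answer word embeddings column-wise (d x m)."""
    alphas = sigmoid(r_q @ M_qi @ X_a)   # shape (m,), one gate per word
    return X_a * alphas, alphas          # "distilled" embeddings and their weights

# Toy usage: 50-dim word embeddings, a 40-dim question vector, 9 answer words.
rng = np.random.default_rng(1)
d, k, m = 50, 40, 9
X_a = rng.normal(size=(d, m))
r_q = rng.normal(size=k)
M_qi = rng.normal(0, 0.1, (k, d))        # learned attention matrix (assumed init)
X_tilde, alphas = iarnn_word_attention(X_a, r_q, M_qi)
```

The gated embeddings X_tilde then play the role of the distilled input that the GRU described next consumes, with its hidden states average-pooled into the answer representation.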
In this work, we use GRU instead of LSTM as building block for RNN because it has shown advantages in many tasks and has comparatively less parameter(Jozefowicz et al., 2015) which is formulated as follows: zt = σ(Wxz˜xt + Whzht−1) ft = σ(Wxf˜xt + Whfht−1) ˜ht = tanh(Wxh˜xt + Whh(ft ⊙ht−1)) ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht (5) where Wxz, Whz, Wxf, Whh, Wxh are weight matrices and ⊙stands for element-wise multiplication. Finally, we get candidate answer representation by average pooling all the hidden state ht. we call this model IARNN-WORD as the attention is paid to the original input words. This model is shown in Figure 2. rq Attention ra ave ෨𝑋𝑡 𝑋𝑡 ℎ𝑡 GRU Figure 2: IARNN-WORD architecture. rq is question representation. rq Attention ra ave ෨𝑋𝑡 𝑋𝑡 ℎ𝑡 GRU h0 Figure 3: IARNN-CONTEXT architecture for building candidate answer sentence representation. h0 is added for completeness. 4.2 IARNN-CONTEXT IABRNN-WORD attend input word embedding directly. However, the answer sentence may consist of consecutive words that are related to the question, and a word may be irrelevant to question by itself but relevant in the context of answer sentence. So the above word by word attention mechanism may not capture the relationship between multiple words. In order to import contextual information into attention process, we modify the attention weights in Equation 4 with additional context information: wC(t) = Mhcht−1 + Mqcrq αt C = σ(wT C(t)xt) ˜xt = αt C ∗xt (6) where we use ht−1 as context, Mhc and Mqc are attention weight matrices, wC(t) is the attention representation which consists of both question and word context information. This additional context attention endows our model to capture relevant part in longer text span. We show this model in Figure 3. 1291 4.3 IARNN-GATE Inspired by the previous work of LSTM (Hochreiter and Schmidhuber, 1997) on solving the gradient exploding problem in RNN and recent work on building distributed word representation with topic information(Ghosh et al., 2016), instead of adding attention information to the original input, we can apply attention deeper to the GRU inner activation (i.e zt and ft). Because these inner activation units control the flow of the information within the hidden stage and enables information to pass long distance in a sentence, we add attention information to these active gates to influence the hidden representation as follows: zt = σ(Wxzxt + Whzht−1+Mqzrq) ft = σ(Wxfxt + Whfht−1+Mqfrq) ˜ht = tanh(Wxhxt + Whh(ft ⊙ht−1)) ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht (7) where Mqz and Mhz are attention weight matrices. In this way, the update and forget units in GRU can focus on not only long and short term memory but also the attention information from the question. The architecture is shown in Figure 4. 4.4 IARNN-OCCAM In answer selection, the answer sentence may only contain small number of words that are related to the question. In IARNN-WORD and IARNN-CONTEXT, we calculate each word attention weight without considering total weights. Similar with Raiman(2015) who adds regulation to the input gate, we punish the summation of the attention weights to enforce sparsity. This is an application of Occam’s Razor: Among the whole words set, we choose those with fewest number that can represent the sentence. However, assigning a pre-defined hyper-parameter for this regulation2 is not an ideal way because it punishes all question attention weights with same strength. For different questions there may be different number of snippets in candidate answer that are required. 
For example, when the question type is When or Who, answer sentence may only contains a little relavant words so we should impose more sparsity on the summation of the attention. But when the 2For example, in many machine learning problem the original objective sometimes followed with a L1 or L2 regulation with hyper-parameter λ1 or λ2 to control the tradeoff between the original objective J and the sparsity criterion:J∗= J + (λ1|λ2) P(L1|L2norm) rq ht xt ht-1 xt ht-1 zt ft ෨ℎ𝑡 1Attention GRU Figure 4: IABRNN-GATE architecture. We show one time step GRU inner state process within the blue dotted line. question type is Why or How, there may be much more words on the sentence that are relevant to the question so we should set the regulation value small accordingly. In this work, this attention regulation is added as follows: for the specific question Qi and its representation ri q, we use a vector wqp to project it into scalar value ni p, and then we add it into the original objective Ji as follows: ni p = max{wT qpri q, λq} J∗ i = Ji + ni p mc X t=1 αi t (8) where αi t is attention weights in Equation 4 and Equation 6. λq is a small positive hyper-parameter. It needs to mention that we do not regulate IARNN-GATE because the attention has been embedded to gate activation. 5 Experiments 5.1 Quantify Traditional Attention based Model Bias Problem In order to quantify the outer attention based RNN model’s attention bias problem in Section 3, we build an outer attention based model similar with Tan (2015). First of all, for the question we build its representation by averaging its hidden states in LSTM, then we build the candidate answer sentence representation in an attentive way introduced in Section 3. Next we use the cosine similarity to compare question and answer representation similarity. Finally, we adopt max-margin hinge loss as objective: L = max{0,M −cosine(rq, ra+) + cosine(rq, ra−)} (9) 1292 position in a sentence 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 attention weight #10-4 0.8 0.9 1 1.1 1.2 1.3 1.4 Bi-directional OARNN Bi-directional IARNN-WORD start end Figure 5: One directional OARNN attention distribution, the horizontal axis is position of word in a sentence that has been normalized from 1 to 10000. position in a sentence 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 attention weight #10-4 0.8 0.9 1 1.1 1.2 1.3 1.4 Bi-directional OARNN Bi-directional IARNN-WORD start end Figure 6: Bi-directional OARNN attention distribution, the horizontal axis is the postion of the word in a sentence that has been normalized from 1 to 10000. where a+ is ground truth answer candidate and a−stands for negative one, the scalar M is a predefined margin. When training result saturates after 50 epoches, we get the attention weight distribution (i.e. sq in Equation 2). The experiment is conducted on two answer selection datasets: WikiQA (Yang et al., 2015) and TrecQA (Wang et al., 2007). The normalized attention weights is reported in Figure 5. However, the above model use only forward LSTM to build hidden state representation, the attention bias problem may attribute to the biased answer distribution: the useful part of the answer to the question sometimes may located at the end of the sentence. So we try OARNN in bidirectional architecture, where the forward LSTM and backward LSTM are concatenated for hidden representation, The bidirectional attention based LSTM attention distribution is shown in Figure 6. 
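For reference, the max-margin objective of Eq. (9), which is also the training criterion adopted in the later evaluations, can be written down directly. The sketch below is illustrative only: the vectors are random stand-ins for learned representations, and the default margin of 0.15 is the value reported for WikiQA in the experimental setup.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hinge_loss(r_q, r_a_pos, r_a_neg, margin=0.15):
    """Eq. (9): L = max{0, M - cos(r_q, r_a+) + cos(r_q, r_a-)}.
    The loss is zero once the positive answer beats the negative one by the margin."""
    return max(0.0, margin - cosine(r_q, r_a_pos) + cosine(r_q, r_a_neg))

# Toy check: a positive answer aligned with the question incurs (near) zero loss.
rng = np.random.default_rng(3)
r_q = rng.normal(size=40)
print(hinge_loss(r_q,
                 r_a_pos=r_q + 0.01 * rng.normal(size=40),
                 r_a_neg=rng.normal(size=40)))
```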
Analysis: As is shown in Figure 5 and 6, for one-directional OARNN, as we move from beginning to the end in a sentence, the question attention gains continuously; when we use bidirectional OARNN, the hidden representations near two ends of a sentence get more attention. This is consistent with our assumption that for a hidden representation in RNN, the closer to the end of a sentence, the more attention it should drawn from question. But the relevant part may be located anywhere in a answer. As a result, when the sample size is large enough3, the attention weight should be unformly distributed. The traditional attention after representation style RNN may suffer from the biased attention problem. Our IARNN models are free from this problem and distribute nearly uniform (orange line) in a sentence. 5.2 IARNN evaluation Common Setup: We use the off-the-shelf 100dimension word embeddings from word2vec4, and initiate all weights and attention matrices by fixing their largest singular values to 1 (Pascanu et al., 2013). IARNN-OCCAM base regulation hyperparameter λq is set to 0.05, we add L2 penalty with a coefficient of 10−5 . Dropout (Srivastava et al., 2014) is further applied to every parameters with probability 30%. We use Adadelta(Zeiler, 2012) with ρ = 0.90 to update parameters. We choose three datasets for evaluation: InsuranceQA, WikiQA and TREC-QA. These datasets contain questions from different domains. Table 1 presents some statistics about these datasets. We adopt a max-margin hinge loss as training objective. The results are reported in terms of MAP and MRR in WikiQA and TREC-QA and accuracy in InsuranceQA. We use bidirectional GRU for all models. We share the GRU parameter between question and answer which has shown significant improvement on performance and convergency rate (Tan et al., 2015; Feng et al., 2015). There are two common baseline systems for above three datasets: • GRU: A non-attentive GRU-RNN that models the question and answer separately. • OARNN: Outer attention-based RNN models (OARNN) with GRU which is detailed in Section 5.1. WikiQA (Yang et al., 2015) is a recently released open-domain question answering 310000 for WikiQA and 5000 for TrecQA in experiment. 4https://code.google.com/archive/p/word2vec/ 1293 Dataset(train / test / dev) InsuranceQA WikiQA TREC-QA # of questions 12887 / 1800x2 /1000 873 / 243 / 126 78 / 68 / 65 # of sentences 24981(ALL) 20360 / 6165 / 2733 5919 / 1442 / 1117 Ave length of question 7.16 7.16 / 7.26 / 7.23 11.39 / 8.63 / 8.00 Ave length of sentence 49.5 25.29 / 24.59 / 24.59 30.39 / 25.61 / 24.9 Table 1: The statistics of three answer selection datasets. For the TREC-QA, we use the cleaned dataset that has been edit by human. For WikiQA and TREC-QA we remove all the questions that has no right or wrong answers. System MAP MRR (Yang et al., 2015) 0.652 0.6652 (Yin et al., 2015) 0.6921 0.7108 (Santos et al., 2016) 0.6886 0.6957 GRU 0.6581 0.6691 OARNN 0.6881 0.7013 IARNN-word 0.7098 0.7234 IARNN-Occam(word) 0.7121 0.7318 IARNN-context 0.7182 0.7339 IARNN-Occam(context) 0.7341 0.7418 IARNN-Gate 0.7258 0.7394 Table 2: Performances on WikiQA dataset in which all answers are collected from Wikipedia. In addition to the original (question,positive,negative) triplets, we randomly select a bunch of negative answer candidates from answer sentence pool and finally we get a relatively abundant 50,298 triplets. We use cosine similarity to compare the question and candidate answer sentence. 
The hidden variable’s length is set to 165 and batch size is set to 1. We use sigmoid as GRU inner active function, we keep word embedding fixed during training. Margin M was set to 0.15 which is tuned in the development set. We adopt three additional baseline systems applied to WikiQA: (1) A bigram CNN models with average pooling(Yang et al., 2015). (2) An attention-based CNN model which uses an interactive attention matrix for both question and answer(Yin et al., 2015) 5 (3) An attention based CNN models which builds the attention matrix after sentence representation(Santos et al., 2016). The result is shown in Table 2. InsuranceQA (Feng et al., 2015) is a domain specific answer selection dataset in which all questions is related to insurance. Its vocabulary size is comparatively small (22,353), we set the batch size to 16 and the hidden variable size to 145, hinge loss margin M is adjusted to 0.12 by evaluation behavior. Word embeddings are also learned during training. We adopt the Geometric mean of Euclidean and Sigmoid Dot (GESD) proposed in (Feng et al., 2015) to measure the similarity be5In their experiment some extra linguistic features was also added for better performance. System Dev Test1 Test2 (Feng et al., 2015) 65.4 65.3 61.0 (Santos et al., 2016) 66.8 67.8 60.3 GRU 59.4 53.2 58.1 OARNN 65.4 66.1 60.2 IARNN-word 67.2125 67.0651 61.5896 IARNN-Occam(word) 69.9130 69.5923 63.7317 IARNN-context 67.1025 66.7211 63.0656 IARNN-Occam(context) 69.1125 68.8651 65.1396 IARNN-Gate 69.9812 70.1128 62.7965 Table 3: Experiment result in InsuranceQA, (Feng et al., 2015) is a CNN architecture without attention mechanism. System MAP MRR (Wang and Nyberg, 2015) † 0.7134 0.7913 (Wang and Ittycheriah, 2015) † 0.7460 0.8200 (Santos et al., 2016) † 0.7530 0.8511 GRU 0.6487 0.6991 OARNN 0.6887 0.7491 IARNN-word 0.7098 0.7757 IARNN-Occam(word) 0.7162 0.7916 IARNN-context 0.7232 0.8069 IARNN-Occam(context) 0.7272 0.8191 IARNN-Gate 0.7369 0.8208 Table 4: Result of different systems in TrecQA.(Wang and Ittycheriah, 2015) propose a question similarity model to extract features from word alignment between two questions which is suitable to FAQ based QA. It needs to mention that the system marked with † are learned on TREC-QA original full training data. tween two representations: GESD(x, y) = 1 1 + ||x −y||× 1 1 + exp(−γ(xyT + c)) (10) which shows advantage over cosine similarity in experiments. We report accuracy instead of MAP/MRR because one question only has one right answers in InsuranceQA. The result is shown in Table 3. TREC-QA was created by Wang et al.(2007) based on Text REtrieval Conference (TREC) QA track (8-13) data. The size of hidden variable was set to 80, M was set to 0.1. This dataset is comparatively small so we set word embedding vector 1294 Q: how old was monica lewinsky during the affair ? Monica Samille Lewinsky ( born July 23 , 1973 ) is an American woman with whom United States President Bill Clinton admitted to having had an `` improper relationship '' while she worked at the White House in 1995 and 1996 . Monica Samille Lewinsky ( born July 23 , 1973 ) is an American woman with whom United States President Bill Clinton admitted to having had an `` improper relationship '' while she worked at the White House in 1995 and 1996 . OARNN: IARNN-CONTEXT: Figure 7: An example demonstrates the advantage of IARNN in capturing the informed part of a sentence compared with OARNN. The effects of relativistic self focusing and preformed plasma channel guiding are analyzed. 
Q: what did gurgen askaryan research when he entered the moscow state university? IARNN-CONTEXT: IARNN-WORD: Answer: Figure 8: An example illustrates the IARNN-CONTEXT could attend the consecutive words in a sentence. size to 50 and update it during training. It needs to mention that we do not use the original TREC-QA training data but the smaller one which has been edited by human. The result is shown in Table 4. 6 Result and Analysis We can see from the result tables that the attention based RNN models achieve better results than the non-attention RNN models (GRU). OARNN and IARNN beat the non-attentive GRU in every datasets by a large margin, which proves the importance of attention mechanism in representing answer sentence in AS. For the non-attentive models, the fixed width of the hidden vectors is a bottleneck for interactive information flow, so the informative part of the question could only propagate through the similarity score which is blurred for the answer representation to be properly learned. But in attention based models, the question attention information is introduced to influence the answer sentence representation explicitly, in this way we can improve sentence representation for the specific target (or topic (Ghosh et al., 2016)). The inner attention RNN models outperform outer attention model in three datasets, this is corresponds to our intuition that the bias attention problem in OARNN may cause a biasd sentence representation. An example of the attention heatmap is shown in Figure7. To answer the question, we should focus on “born July 23 , 1973” which is located at the beginning of the sentence. But in OARNN, the attention is biases towards the last few last words in the answer. In IARNNCONTEXT, the attention is paid to the relevant part and thus results in a more relevant representation. The attention with context information could also improves the result, we can see that IARNNCONTEXT and IARNN-GATE outperform IARNN-WORD in three experiments. IARNNWORD may ignore the importance of some words because it attends answer word by word, for example in Figure8, the specific word self or focusing may not be related to the question by itself, but their combination and the previous word relativistic is very informative for answering the question. In IARNN-CONTEXT we add attention information dynamically in RNN process, thus it could capture the relationship between word and its context. In general, we can see from table3-5 that the IARNN-GATE outperforms IARNN-CONTEXT and IARNN-WORD. In IARNN-WORD and IARNN-CONTEXT, the attention is added to impact each word representation, but the recurrent process of updating RNN hidden state representations are not influenced. IARNN-GATE embeds the attention into RNN inner activation, the attentive activation gate are more capable of controlling the attention information in RNN. This enlights an important future work: we could add attention information as an individual activation gate, and use this additional gate to control attention information flow in RNN. The regulation of the attention weights (Occam’s attention) could also improve the representation. We also conduct an experiment on WikiQA (training process) to measure the Occam’s attention regulation on different type of questions. 
We use rules to classify question into 6 types 1295 who why how when where what 0.131 0.065 0.052 0.103 0.118 0.089 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 who why how when where what Figure 9: The Occam’s attention regulation on different types of question. (i.e. who,why,how,when,where,what), and each of them has the same number of samples to avoid data imbalance. We report the Occam’m regulation (ni p in Equation.8) in Figure 9. As we can see from the radar graph, who and where are regulized severely compared with other types of question, this is correspond to their comparetively less information in the answer candidate to answer the question. This emphasize that different types question should impose different amount of regulation on its candidate answers. The experiment result on three AS datasets shows that the improvement of Occam’s attention is significant in WikiQA and insuranceQA. Because most of the sentence are relatively long in these two datasets, and the longer the sentence, the more noise it may contain, so we should punish the summation of the attention weights to remove some irrelevant parts. Our question-specific Occam’s attention punishes the summation of attention and thus achieves a better result for both IARNN-WORD and IARNNCONTEXT. 7 Conclusion and Future Work In this work we present some variants of traditional attention-based RNN models with GRU. The key idea is attention before representation. We analyze the deficiency of traditional outer attention-based RNN models qualitatively and quantitatively. We propose three models where attention is embedded into representation process. Occam’s Razor is further implemented to this attention for better representation. Our results on answer selection demonstrate that the inner attention outperforms the outer attention in RNN. Our models can be further extended to other NLP tasks such as recognizing textual entailments where attention mechanism is important for sentence representation. In the future we plan to apply our inner-attention intuition to other neural networks such as CNN or multi-layer perceptron. Acknowledgments The work was supported by the Natural Science Foundation of China (No.61533018), the National High Technology Development 863 Program of China (No.2015AA015405) and the National Natural Science Foundation of China (No.61272332). And this research work was also supported by Google through focused research awards program. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Minwei Feng, Bing Xiang, Michael R Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learning to answer selection: A study and an open task. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. arXiv preprint arXiv:1602.06291. Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1011–1019. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in Neural Information Processing Systems, pages 1684– 1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350. Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285. 1296 Tom´aˇs Mikolov, Stefan Kombrink, Luk´aˇs Burget, Jan Honza ˇCernock`y, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5528–5531. IEEE. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204–2212. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1310–1318. Jonathan Raiman and Szymon Sidor. 2015. Occam’s gates. arXiv preprint arXiv:1506.08251. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. arXiv preprint arXiv:1602.03609. Aliaksei Severyn and Alessandro Moschitti. 2013. Automatic feature engineering for answer selection and extraction. In EMNLP, pages 458–467. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ming Tan, Bing Xiang, and Bowen Zhou. 2015. Lstmbased deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108. Zhiguo Wang and Abraham Ittycheriah. 2015. Faqbased question answering via word alignment. arXiv preprint arXiv:1507.02628. Di Wang and Eric Nyberg. 2015. A long shortterm memory model for answer sentence selection in question answering. ACL, July. Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. In EMNLP-CoNLL, volume 7, pages 22–32. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Citeseer. Xuchen Yao, Benjamin Van Durme, Chris CallisonBurch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In HLTNAACL, pages 858–867. Citeseer. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of ACL. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Yin Zheng, Richard S Zemel, Yu-Jin Zhang, and Hugo Larochelle. 2015. A neural autoregressive approach to attention-based recognition. International Journal of Computer Vision, 113(1):67–79. 1297
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1298–1307, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Relation Classification via Multi-Level Attention CNNs Linlin Wang1∗, Zhu Cao1∗, Gerard de Melo2, Zhiyuan Liu3† 1Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 2 Department of Computer Science, Rutgers University, Piscataway, NJ, USA 3State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China {ll-wang13,cao-z13}@mails.tsinghua.edu.cn, [email protected] Abstract Relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text. We propose a novel convolutional neural network architecture for this task, relying on two levels of attention in order to better discern patterns in heterogeneous contexts. This architecture enables endto-end learning from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments show that our model outperforms previous state-of-the-art methods, including those relying on much richer forms of prior knowledge. 1 Introduction Relation classification is the task of identifying the semantic relation holding between two nominal entities in text. It is a crucial component in natural language processing systems that need to mine explicit facts from text, e.g. for various information extraction applications as well as for question answering and knowledge base completion (Tandon et al., 2011; Chen et al., 2015). For instance, given the example input “Fizzy [drinks] and meat cause heart disease and [diabetes].” with annotated target entity mentions e1 = “drinks” and e2 = “diabetes”, the goal would be to automatically recognize that this sentence expresses a causeeffect relationship between e1 and e2, for which we use the notation Cause-Effect(e1,e2). Accurate relation classification facilitates precise sentence interpretations, discourse processing, and higherlevel NLP tasks (Hendrickx et al., 2010). Thus, ∗Equal contribution. † Corresponding author. Email: [email protected] relation classification has attracted considerable attention from researchers over the course of the past decades (Zhang, 2004; Qian et al., 2009; Rink and Harabagiu, 2010). In the example given above, the verb corresponds quite closely to the desired target relation. However, in the wild, we encounter a multitude of different ways of expressing the same kind of relationship. This challenging variability can be lexical, syntactic, or even pragmatic in nature. An effective solution needs to be able to account for useful semantic and syntactic features not only for the meanings of the target entities at the lexical level, but also for their immediate context and for the overall sentence structure. Thus, it is not surprising that numerous featureand kernel-based approaches have been proposed, many of which rely on a full-fledged NLP stack, including POS tagging, morphological analysis, dependency parsing, and occasionally semantic analysis, as well as on knowledge resources to capture lexical and semantic features (Kambhatla, 2004; Zhou et al., 2005; Suchanek et al., 2006; Qian et al., 2008; Mooney and Bunescu, 2005; Bunescu and Mooney, 2005). 
In recent years, we have seen a move towards deep architectures that are capable of learning relevant representations and features without extensive manual feature engineering or use of external resources. A number of convolutional neural network (CNN), recurrent neural network (RNN), and other neural architectures have been proposed for relation classification (Zeng et al., 2014; dos Santos et al., 2015; Xu et al., 2015b). Still, these models often fail to identify critical cues, and many of them still require an external dependency parser. We propose a novel CNN architecture that addresses some of the shortcomings of previous approaches. Our key contributions are as follows: 1. Our CNN architecture relies on a novel multi1298 level attention mechanism to capture both entity-specific attention (primary attention at the input level, with respect to the target entities) and relation-specific pooling attention (secondary attention with respect to the target relations). This allows it to detect more subtle cues despite the heterogeneous structure of the input sentences, enabling it to automatically learn which parts are relevant for a given classification. 2. We introduce a novel pair-wise margin-based objective function that proves superior to standard loss functions. 3. We obtain the new state-of-the-art results for relation classification with an F1 score of 88.0% on the SemEval 2010 Task 8 dataset, outperforming methods relying on significantly richer prior knowledge. 2 Related Work Apart from a few unsupervised clustering methods (Hasegawa et al., 2004; Chen et al., 2005), the majority of work on relation classification has been supervised, typically cast as a standard multiclass or multi-label classification task. Traditional feature-based methods rely on a set of features computed from the output of an explicit linguistic preprocessing step (Kambhatla, 2004; Zhou et al., 2005; Boschee et al., 2005; Suchanek et al., 2006; Chan and Roth, 2010; Nguyen and Grishman, 2014), while kernel-based methods make use of convolution tree kernels (Qian et al., 2008), subsequence kernels (Mooney and Bunescu, 2005), or dependency tree kernels (Bunescu and Mooney, 2005). These methods thus all depend either on carefully handcrafted features, often chosen on a trial-and-error basis, or on elaborately designed kernels, which in turn are often derived from other pre-trained NLP tools or lexical and semantic resources. Although such approaches can benefit from the external NLP tools to discover the discrete structure of a sentence, syntactic parsing is error-prone and relying on its success may also impede performance (Bach and Badaskar, 2007). Further downsides include their limited lexical generalization abilities for unseen words and their lack of robustness when applied to new domains, genres, or languages. In recent years, deep neural networks have shown promising results. The Recursive MatrixVector Model (MV-RNN) by Socher et al. (2012) sought to capture the compositional aspects of the sentence semantics by exploiting syntactic trees. Zeng et al. (2014) proposed a deep convolutional neural network with softmax classification, extracting lexical and sentence level features. However, these approaches still depend on additional features from lexical resources and NLP toolkits. Yu et al. (2014) proposed the Factor-based Compositional Embedding Model, which uses syntactic dependency trees together with sentence-level embeddings. In addition to dos Santos et al. 
(2015), who proposed the Ranking CNN (CR-CNN) model with a class embedding matrix, Miwa and Bansal (2016) similarly observed that LSTM-based RNNs are outperformed by models using CNNs, due to limited linguistic structure captured in the network architecture. Some more elaborate variants have been proposed to address this, including bidirectional LSTMs (Zhang et al., 2015), deep recurrent neural networks (Xu et al., 2016), and bidirectional treestructured LSTM-RNNs (Miwa and Bansal, 2016). Several recent works also reintroduce a dependency tree-based design, e.g., RNNs operating on syntactic trees (Hashimoto et al., 2013), shortest dependency path-based CNNs (Xu et al., 2015a), and the SDP-LSTM model (Xu et al., 2015b). Finally, Nguyen and Grishman (2015) train both CNNs and RNNs and variously aggregate their outputs using voting, stacking, or log-linear modeling (Nguyen and Grishman, 2015). Although these recent models achieve solid results, ideally, we would want a simple yet effective architecture that does not require dependency parsing or training multiple models. Our experiments in Section 4 demonstrate that we can indeed achieve this, while also obtaining substantial improvements in terms of the obtained F1 scores. 3 The Proposed Model Given a sentence S with a labeled pair of entity mentions e1 and e2 (as in our example from Section 1), relation classification is the task of identifying the semantic relation holding between e1 and e2 among a set of candidate relation types (Hendrickx et al., 2010). Since the only input is a raw sentence with two marked mentions, it is non-trivial to obtain all the lexical, semantic and syntactic cues necessary to make an accurate prediction. To this end, we propose a novel multi-level attention-based convolution neural network model. A schematic overview of our architecture is given 1299 Notation Definition Notation Definition wM i Final word emb. zi Context emb. Wf Conv. weight Bf Conv. bias wO Network output W L Relation emb. Aj Input att. Ap Pooling att. G Correlation matrix Table 1: Overview of main notation. in Figure 1. The input sentence is first encoded using word vector representations, exploiting the context and a positional encoding to better capture the word order. A primary attention mechanism, based on diagonal matrices is used to capture the relevance of words with respect to the target entities. To the resulting output matrix, one then applies a convolution operation in order to capture contextual information such as relevant n-grams, followed by max-pooling. A secondary attention pooling layer is used to determine the most useful convolved features for relation classification from the output based on an attention pooling matrix. The remainder of this section will provide further details about this architecture. Table 1 provides an overview of the notation we will use for this. The final output is given by a new objective function, described below. 3.1 Classification Objective We begin with top-down design considerations for the relation classification architecture. For a given sentence S, our network will ultimately output some wO. For every output relation y ∈Y, we assume there is a corresponding output embedding W L y , which will automatically be learnt by the network (dos Santos et al., 2015). We propose a novel distance function δθ(S) that measures the proximity of the predicted network output wO to a candidate relation y as follows. δθ(S, y) = wO |wO| −W L y (1) using the L2 norm (note that W L y are already normalized). 
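As a quick illustration of Eq. (1), the sketch below computes the distance from a (toy) network output to every relation embedding, with the class embeddings stored column-wise and kept normalized. The nearest-class decision rule at the end is an assumption that is consistent with, but not spelled out by, this formulation, and the sizes (a 1000-dimensional output and 19 relation types) anticipate the experimental setup described later.

```python
import numpy as np

def delta(w_o, W_L):
    """Eq. (1): L2 distance between the normalized network output w_O and every
    (already normalized) relation embedding, stored as the columns of W^L.
    Returns one distance per candidate relation."""
    w = w_o / np.linalg.norm(w_o)
    return np.linalg.norm(w[:, None] - W_L, axis=0)

# Toy usage with |Y| = 19 relation classes and d_c = 1000-dimensional outputs.
rng = np.random.default_rng(4)
n_classes, d_c = 19, 1000
W_L = rng.normal(size=(d_c, n_classes))
W_L /= np.linalg.norm(W_L, axis=0, keepdims=True)   # keep class embeddings normalized
w_o = rng.normal(size=d_c)
distances = delta(w_o, W_L)
predicted = int(np.argmin(distances))   # assumed inference rule: nearest class embedding
```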
Based on this distance function, we design a margin-based pairwise loss function L as L =  δθ(S, y) + (1 −δθ(S, ˆy−))  + β∥θ∥2 =  1 + wO |wO| −W L y − wO |wO| −W L ˆy−  + β∥θ∥2, (2) where 1 is the margin, β is a parameter, δθ(S, y) is the distance between the predicted label embedding W L and the ground truth label y and δθ(S, ˆy−) refers to the distance between wO and a selected incorrect relation label ˆy−. The latter is chosen as the one with the highest score among all incorrect classes (Weston et al., 2011; dos Santos et al., 2015), i.e. ˆy−= argmax y′∈Y,y′̸=y δ(S, y′). (3) This margin-based objective has the advantage of a strong interpretability and effectiveness compared with empirical loss functions such as the ranking loss function in the CR-CNN approach by dos Santos et al. (2015). Based on a distance function motived by word analogies (Mikolov et al., 2013b), we minimize the gap between predicted outputs and ground-truth labels, while maximizing the distance with the selected incorrect class. By minimizing this pairwise loss function iteratively (see Section 3.5), δθ(S, y) are encouraged to decrease, while δθ(S, ˆy−) increase. 3.2 Input Representation Given a sentence S = (w1, w2, ..., wn) with marked entity mentions e1(=wp) and e2(=wt), (p, t ∈[1, n], p ̸= t), we first transform every word into a real-valued vector to provide lexicalsemantic features. Given a word embedding matrix WV of dimensionality dw × |V | , where V is the input vocabulary and dw is the word vector dimensionality (a hyper-parameter), we map every wi to a column vector wd i ∈Rdw. To additionally capture information about the relationship to the target entities, we incorporate word position embeddings (WPE) to reflect the relative distances between the i-th word to the two marked entity mentions (Zeng et al., 2014; dos Santos et al., 2015). For the given sentence in Fig. 1, the relative distances of word “and” to entity e1 “drinks” and e2 “diabetes” are −1 and 6, respectively. Every relative distance is mapped to a randomly initialized position vector in Rdp, where dp is a hyper-parameter. For a given word i, we obtain two position vectors wp i,1 and wp i,2, with regard to entities e1 and e2, respectively. The overall word embedding for the i-th word is wM i = [(wd i )⊺, (wp i,1)⊺, (wp i,2)⊺]⊺. Using a sliding window of size k centered around the i-th word, we encode k successive 1300 Pooling att. matrix fizzy Input S with marked entities: Fizzy [drinks] and meat cause heart disease and [diabetes] . Input att. matrix Input att. matrix drinks and meat cause heart disease and diabetes fizzy drinks and meat cause heart disease and diabetes Entity diabetes Lookup table & , Lookup table x x Convolution layer output Convolution with kernel Window operation Max x x x Entity drinks Attention based convolution input Figure 1: Schematic overview of our Multi-Level Attention Convolutional Neural Networks words into a vector zi ∈R(dw+2dp)k to incorporate contextual information as zi = [(wM i−(k−1)/2)⊺, ..., (wM i+(k−1)/2)⊺]⊺ (4) An extra padding token is repeated multiple times for well-definedness at the beginning and end of the input. 3.3 Input Attention Mechanism While position-based encodings are useful, we conjecture that they do not suffice to fully capture the relationships of specific words with the target entities and the influence that they may bear on the target relations of interest. 
We design our model so as to automatically identify the parts of the input sentence that are relevant for relation classification. Attention mechanisms have successfully been applied to sequence-to-sequence learning tasks such as machine translation (Bahdanau et al., 2015; Meng et al., 2015) and abstractive sentence summarization (Rush et al., 2015), as well as to tasks such as modeling sentence pairs (Yin et al., 2015) and question answering (Santos et al., 2016). To date, these mechanisms have generally been used to allow for an alignment of the input and output sequence, e.g. the source and target sentence in machine translation, or for an alignment between two input sentences as in sentence similarity scoring and question answering. fizzy drinks and meat cause heart disease and diabetes Entity 1: drinks Entity 2: diabetes Lookup table Given Text S: ... ... ... ... ... ... ... ... ... Word & Position embedding Window operation Diagonal Input att. matrix (S,drinks) Diagonal Input att. matrix (S,diabetes) ... ... ... ... Figure 2: Input and Primary Attention In our work, we apply the idea of modeling attention to a rather different kind of scenario involving heterogeneous objects, namely a sentence and two entities. With this, we seek to give our model the capability to determine which parts of the sentence are most influential with respect to the two entities of interest. Consider that in a long sentence with multiple clauses, perhaps only a single verb or noun might stand in a relevant relationship with a given target entity. As depicted in Fig. 2, the input representation layer is used in conjunction with diagonal attention matrices and convolutional input composition. Contextual Relevance Matrices. Consider the example in Fig. 1, where the non-entity word “cause” is of particular significance in determining the relation. Fortunately, we can exploit the fact 1301 that there is a salient connection between the words “cause” and “diabetes” also in terms of corpus cooccurrences. We introduce two diagonal attention matrices Aj with values Aj i,i = f(ej, wi) to characterize the strength of contextual correlations and connections between entity mention ej and word wi. The scoring function f is computed as the inner product between the respective embeddings of word wi and entity ej, and is parametrized into the network and updated during the training process. Given the Aj matrices, we define αj i = exp(Aj i,i) Pn i′=1 exp(Aj i′,i′) , (5) to quantify the relative degree of relevance of the ith word with respect to the j-th entity (j ∈{1, 2}). Input Attention Composition. Next, we take the two relevance factors α1 i and α2 i and model their joint impact for recognizing the relation via simple averaging as ri = zi α1 i + α2 i 2 . (6) Apart from this default choice, we also evaluate two additional variants. The first (Variant-1) concatenates the word vectors as ri = [(zi α1 i )⊺, (zi α2 i )⊺]⊺, (7) to obtain an information-enriched input attention component for this specific word, which contains the relation relevance to both entity 1 and entity 2. The second variant (Variant-2) interprets relations as mappings between two entities, and combines the two entity-specific weights as ri = zi α1 i −α2 i 2 , (8) to capture the relation between them. Based on these ri, the final output of the input attention component is the matrix R = [r1, r2, . . . , rn], where n is the sentence length. 
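Putting Eqs. (5) and (6) together, the following sketch computes the two entity-specific attention vectors and uses their average to weight each context window z_i. It is illustrative only: the scoring function is taken as a plain inner product of fixed toy embeddings, whereas in the model it is parametrized and updated during training, and all shapes are assumed toy values.

```python
import numpy as np

def entity_attention(word_embs, entity_emb):
    """Eq. (5): the diagonal entries A^j_{i,i} are inner products between each word
    embedding and the entity embedding; a softmax turns them into weights alpha^j."""
    scores = word_embs @ entity_emb            # one score per word
    e = np.exp(scores - scores.max())          # numerically stabilized softmax
    return e / e.sum()

def input_attention_compose(Z, word_embs, e1_emb, e2_emb):
    """Eq. (6): r_i = z_i * (alpha^1_i + alpha^2_i) / 2, applied column-wise to Z."""
    a1 = entity_attention(word_embs, e1_emb)
    a2 = entity_attention(word_embs, e2_emb)
    return Z * ((a1 + a2) / 2.0)               # R = [r_1, ..., r_n]

# Toy usage: n=10 words, 50-dim word vectors, windows z_i of size k(d_w + 2*d_p).
rng = np.random.default_rng(5)
n, d_w, d_p, k = 10, 50, 25, 3
word_embs = rng.normal(size=(n, d_w))
Z = rng.normal(size=(k * (d_w + 2 * d_p), n))
R = input_attention_compose(Z, word_embs, e1_emb=word_embs[1], e2_emb=word_embs[8])
```

Variant-1 of Eq. (7) would concatenate the two weighted copies of z_i instead of averaging them, and Variant-2 of Eq. (8) would weight z_i by half the difference of the two attention values.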
3.4 Convolutional Max-Pooling with Secondary Attention After this operation, we apply convolutional maxpooling with another secondary attention model to extract more abstract higher-level features from the previous layer’s output matrix R. Convolution Layer. A convolutional layer may, for instance, learn to recognize short phrases such as trigrams. Given our newly generated input attention-based representation R, we accordingly apply a filter of size dc as a weight matrix Wf of size dc × k(dw + 2dp). Then we add a linear bias Bf, followed by a non-linear hyperbolic tangent transformation to represent features as follows: R∗= tanh(WfR + Bf). (9) Attention-Based Pooling. Instead of regular pooling, we rely on an attention-based pooling strategy to determine the importance of individual windows in R∗, as encoded by the convolutional kernel. Some of these windows could represent meaningful n-grams in the input. The goal here is to select those parts of R∗that are relevant with respect to our objective from Section 3.1, which essentially calls for a relation encoding process, while neglecting sentence parts that are irrelevant for this process. We proceed by first creating a correlation modeling matrix G that captures pertinent connections between the convolved context windows from the sentence and the relation class embedding W L introduced earlier in Section 3.1: G = R∗⊺U W L, (10) where U is a weighting matrix learnt by the network. Then we adopt a softmax function to deal with this correlation modeling matrix G to obtain an attention pooling matrix Ap as Ap i,j = exp(Gi,j) Pn i′=1 exp(Gi′,j), (11) where Gi,j is the (i, j)-th entry of G and Ap i,j is the (i, j)-th entry of Ap. Finally, we multiply this attention pooling matrix with the convolved output R∗to highlight important individual phrase-level components, and apply a max operation to select the most salient one (Yin et al., 2015; Santos et al., 2016) for a given dimension of the output. More precisely, we obtain the output representation wO as follows in Eq. (12): wO i = max j (R∗Ap)i,j, (12) where wO i is the i-th entry of wO and (R∗Ap)i,j is the (i, j)-th entry of R∗Ap. 1302 3.5 Training Procedure We rely on stochastic gradient descent (SGD) to update the parameters with respect to the loss function in Eq. (2) as follows: θ ′ = θ + λd(P|S| i=1 [δθ(Si, y) + (1 −δθ(Si, ˆy− i ))]) dθ + λ1 d(β||θ||2) dθ (13) where λ and λ1 are learning rates, and incorporating the β parameter from Eq. (2). 4 Experiments 4.1 Experimental Setup Dataset and Metric. We conduct our experiments on the commonly used SemEval-2010 Task 8 dataset (Hendrickx et al., 2010), which contains 10,717 sentences for nine types of annotated relations, together with an additional “Other” type. The nine types are: Cause-Effect, Component-Whole, Content-Container, EntityDestination, Entity-Origin, Instrument-Agency, Member-Collection, Message-Topic, and ProductProducer, while the relation type “Other” indicates that the relation expressed in the sentence is not among the nine types. However, for each of the aforementioned relation types, the two entities can also appear in inverse order, which implies that the sentence needs to be regarded as expressing a different relation, namely the respective inverse one. For example, Cause-Effect(e1,e2) and CauseEffect(e2,e1) can be considered two distinct relations, so the total number |Y| of relation types is 19. The SemEval-2010 Task 8 dataset consists of a training set of 8,000 examples, and a test set with the remaining examples. 
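Before turning to the evaluation protocol, the attention-based pooling of Section 3.4 (Eqs. (10) to (12)) can be summarized in a few lines. This is a hedged sketch rather than the released implementation: the shapes follow the paper (d_c convolutional filters, n window positions, |Y| = 19 relation classes), while the random initialization of U and W^L is an assumption.

```python
import numpy as np

def attention_pooling(R_star, U, W_L):
    """Eqs. (10)-(12): G = R*^T U W^L; a column-wise softmax over positions gives A^p;
    w^O takes a row-wise max over R* A^p."""
    G = R_star.T @ U @ W_L                       # (n, |Y|) correlation matrix, Eq. (10)
    G = G - G.max(axis=0, keepdims=True)         # numerical stabilization
    expG = np.exp(G)
    A_p = expG / expG.sum(axis=0, keepdims=True) # Eq. (11)
    return (R_star @ A_p).max(axis=1)            # Eq. (12): w^O in R^{d_c}

# Toy usage: d_c = 1000 filters, n = 20 window positions, |Y| = 19 relation classes.
rng = np.random.default_rng(6)
d_c, n, n_classes = 1000, 20, 19
R_star = np.tanh(rng.normal(size=(d_c, n)))      # convolved, tanh-activated features
U = rng.normal(0, 0.01, (d_c, d_c))              # learned weighting matrix (assumed init)
W_L = rng.normal(size=(d_c, n_classes))
w_O = attention_pooling(R_star, U, W_L)
```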
We evaluate the models using the official scorer in terms of the Macro-F1 score over the nine relation pairs (excluding Other). Settings. We use the word2vec skip-gram model (Mikolov et al., 2013a) to learn initial word representations on Wikipedia. Other matrices are initialized with random values following a Gaussian distribution. We apply a cross-validation procedure on the training data to select suitable hyperparameters. The choices generated by this process are given in Table 2. 4.2 Experimental Results Table 3 provides a detailed comparison of our Multi-Level Attention CNN model with previous Parameter Parameter Name Value dp Word Pos. Emb. Size 25 dc Conv. Size 1000 k Word Window Size 3 λ Learning rate 0.03 λ1 Learning rate 0.0001 Table 2: Hyperparameters. approaches. We observe that our novel attentionbased architecture achieves new state-of-the-art results on this relation classification dataset. AttInput-CNN relies only on the primal attention at the input level, performing standard max-pooling after the convolution layer to generate the network output wO, in which the new objective function is utilized. With Att-Input-CNN, we achieve an F1-score of 87.5%, thus already outperforming not only the original winner of the SemEval task, an SVM-based approach (82.2%), but also the wellknown CR-CNN model (84.1%) with a relative improvement of 4.04%, and the newly released DRNNs (85.8%) with a relative improvement of 2.0%, although the latter approach depends on the Stanford parser to obtain dependency parse information. Our full dual attention model Att-PoolingCNN achieves an even more favorable F1-score of 88%. Table 4 provides the experimental results for the two variants of the model given by Eqs. (7) and (8) in Section 3.3. Our main model outperforms the other variants on this dataset, although the variants may still prove useful when applied to other tasks. To better quantify the contribution of the different components of our model, we also conduct an ablation study evaluating several simplified models. The first simplification is to use our model without the input attention mechanism but with the pooling attention layer. The second removes both attention mechanisms. The third removes both forms of attention and additionally uses a regular objective function based on the inner product s = r · w for a sentence representation r and relation class embedding w. We observe that all three of our components lead to noticeable improvements over these baselines. 4.3 Detailed Analysis Primary Attention. To inspect the inner workings of our model, we considered the primary attention matrices of our multi-level attention model 1303 Classifier F1 Manually Engineered Methods SVM (Rink and Harabagiu, 2010) 82.2 Dependency Methods RNN (Socher et al., 2012) 77.6 MVRNN (Socher et al., 2012) 82.4 FCM (Yu et al., 2014) 83.0 Hybrid FCM (Yu et al., 2014) 83.4 SDP-LSTM (Xu et al., 2015b) 83.7 DRNNs (Xu et al., 2016) 85.8 SPTree (Miwa and Bansal, 2016) 84.5 End-To-End Methods CNN+ Softmax (Zeng et al., 2014) 82.7 CR-CNN (dos Santos et al., 2015) 84.1 DepNN (Liu et al., 2015) 83.6 depLCNN+NS (Xu et al., 2015a) 85.6 STACK-FORWARD∗ 83.4 VOTE-BIDIRECT∗ 84.1 VOTE-BACKWARD∗ 84.1 Our Architectures Att-Input-CNN 87.5 Att-Pooling-CNN 88.0 Table 3: Comparison with results published in the literature, where ‘∗’ refers to models from Nguyen and Grishman (2015). 
for the following randomly selected sentence from the test set: The disgusting scene was retaliation against her brother Philip who rents the [room]e1 inside this apartment [house]e2 on Lombard street. Fig. 3 plots the word-level attention values for the input attention layer to act as an example, using the calculated attention values for every individual word in the sentence. We find the word “inside” was assigned the highest attention value, while words such as “room” and “house” also are deemed important. This appears sensible in light of the ground-truth labeling as a ComponentWhole(e1,e2) relationship. Additionally, we observe that words such as “this”, which are rather irrelevant with respect to the target relationship, indeed have significantly lower attention scores. Most Significant Features for Relations. Table 5 lists the top-ranked trigrams for each relation class y in terms of their contribution to the score for determining the relation classification. Recall the definition of δθ(x, y) in Eq. (1). In the network, we trace back the trigram that contributed most to Classifier F1 Att-Input-CNN (Main) 87.5 Att-Input-CNN (Variant-1) 87.2 Att-Input-CNN (Variant-2) 87.3 Att-Pooling-CNN (regular) 88.0 – w/o input attention 86.6 – w/o any attention 86.1 – w/o any attention, w/o δ-objective 84.1 Table 4: Comparison between the main model and variants as well as simplified models. the disgusting scene was retaliation against her brother philip who rents the room inside this apartment house on lombard street 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Figure 3: Input Attention Visualization. The value of the y-coordinate is computed as 100 ∗(At i − mini∈{1,...,n} At i), where At i stands for the overall attention weight assigned to the word i. the correct classification in terms of δθ(Si, y) for each sentence Si. We then rank all such trigrams in the sentences in the test set according to their total contribution and list the top-ranked trigrams.∗In Table 5, we see that these are indeed very informative for deducing the relation. For example, the top trigram for Cause-Effect(e2,e1) is “are caused by”, which strongly implies that the first entity is an effect caused by the latter. Similarly, the top trigram for Entity-Origin(e1,e2) is “from the e2”, which suggests that e2 could be an original location, at which entity e1 may have been located. Error Analysis. Further, we examined some of the misclassifications produced by our model. The following is a typical example of a wrongly classified sentence: ∗For Entity-Destination(e2,e1), there was only one occurrence in the test set. 
1304 Relation (e1, e2) (e2, e1) e1 caused a, caused a e2, e2 caused by, e2 from e1, Cause-Effect e1 resulted in, the cause of, is caused by, are caused by, had caused the, poverty cause e2 was caused by, been caused by Component-Whole e1 of the, of a e2, of the e2, with its e2, e1 consists of, in the e2, part of the e1 has a, e1 comprises e2 Content-Container in a e2, was hidden in, e1 with e2, filled with e2, inside a e2, was contained in e1 contained a, full of e2, Entity-Destination e1 into the, e1 into a, had thrown into was put inside, in a e2 Entity-Origin from this e2, is derived from, e1 e2 is, the e1 e2, from the e2, away from the for e1 e2, the source of Instrument-Agency for the e2, is used by, by a e2, e1 use e2, with a e2, with the e2, a e1 e2 by using e2 Member-Collection of the e2, in the e2, a e1 of, e1 of various, a member of, from the e2 e1 of e2, the e1 of Message-Topic on the e2, e1 asserts the, the e1 of, described in the, e1 points out, e1 is the the topic for, in the e2 Product-Producer e1 made by, made by e2, has constructed a, came up with, from the e2, by the e2 has drawn up, e1 who created Table 5: Most representative trigrams for different relations. A [film]e1 revolves around a [cadaver]e2 who seems to bring misfortune on those who come in contact with it. This sentence is wrongly classified as belonging to the “Other” category, while the ground-truth label is Message-Topic(e1,e2). The phrase “revolves around” does not appear in the training data, and moreover is used metaphorically, rather than in its original sense of turning around, making it difficult for the model to recognize the semantic connection. Another common issue stems from sentences of the form “. . . e1 e2 . . . ”, such as the following ones: The size of a [tree]e1 [crown]e2 is strongly .. . Organic [sesame]e1 [oil]e2 has an ... Before heading down the [phone]e1 [operator]e2 career . . . These belong to three different relation classes, Component-Whole(e2,e1), Entity-Origin(e2,e1), and Instrument-Agency(e1,e2), respectively, which are only implicit in the text, and the context is not particularly helpful. More informative word embeddings could conceivably help in such cases. Convergence. Finally, we examine the convergence behavior of our two main methods. We plot the performance of each iteration in the Att-InputCNN and Att-Pooling-CNN models in Fig. 4. It can be seen that Att-Input-CNN quite smoothly converges to its final F1 score, while for the AttPooling-CNN model, which includes an additional attention layer, the joint effect of these two attention layer induces stronger back-propagation effects. On the one hand, this leads to a seesaw phenomenon in the result curve, but on the other hand it enables us to obtain better-suited models with slightly higher F1 scores. 0 5 10 15 20 25 30 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 Iteration F1 score Att−Pooling−CNN Att−Input−CNN Figure 4: Training Progress of Att-Input-CNN and Att-Pooling-CNN across iterations. 5 Conclusion We have presented a CNN architecture with a novel objective and a new form of attention mechanism that is applied at two different levels. Our results show that this simple but effective model is able to outperform previous work relying on substantially richer prior knowledge in the form of structured models and NLP resources. We expect this sort of architecture to be of interest also beyond the specific task of relation classification, which we intend to explore in future work. 
1305 Acknowledgments The research at IIIS is supported by China 973 Program Grants 2011CBA00300, 2011CBA00301, and NSFC Grants 61033001, 61361136003, 61550110504. Prof. Liu is supported by the China 973 Program Grant 2014CB340501 and NSFC Grants 61572273 and 61532010. References Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Online at http://www.cs.cmu.edu/%7Enbach/papers/ A-survey-on-Relation-Extraction.pdf. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Elizabeth Boschee, Ralph Weischedel, and Alex Zamanian. 2005. Automatic information extraction. In Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, VA, pages 2–4. Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 724–731. Association for Computational Linguistics. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 152–160. Association for Computational Linguistics. Jinxiu Chen, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2005. Unsupervised feature selection for relation extraction. In Proceedings of IJCNLP. Jiaqiang Chen, Niket Tandon, and Gerard de Melo. 2015. Neural word representations from large-scale commonsense knowledge. In Proceedings of WI 2015. Cıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 626–634. Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, page 415. Association for Computational Linguistics. Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. 2013. Simple customization of recursive neural networks for semantic relation classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1372–1376, Seattle, Washington, USA, October. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38. Association for Computational Linguistics. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the 42nd Annual Meeting of the Association for Computation Linguistics, page 22. Association for Computational Linguistics. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 285–290. Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. Encoding source language with convolutional neural network for machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7the International Joint Conference on Natural Language Processing, pages 20–30. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In ICLR Workshop. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746– 751. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. arXiv preprint arXiv:1601.00770. Raymond J Mooney and Razvan C Bunescu. 2005. Subsequence kernels for relation extraction. In Advances in Neural Information Processing Systems, pages 171–178. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 68–74. 1306 Thien Huu Nguyen and Ralph Grishman. 2015. Combining neural networks and log-linear models to improve relation extraction. arXiv preprint arXiv:1511.05926. Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics, volume 1, pages 697–704. Association for Computational Linguistics. Longhua Qian, Guodong Zhou, Fang Kong, and Qiaoming Zhu. 2009. Semi-supervised learning for semantic relation classification using stratified sampling strategy. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Bryan Rink and Sanda Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 256–259. Association for Computational Linguistics. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of EMNLP 2015. Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. arXiv preprint arXiv:1602.03609. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Fabian M Suchanek, Georgiana Ifrim, and Gerhard Weikum. 2006. Combining linguistic and statistical analysis to extract relations from web documents. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 712–717. ACM. Niket Tandon, Gerard de Melo, and Gerhard Weikum. 2011. Deriving a Web-scale common sense fact database. In Proceedings of the Twenty-fifth AAAI Conference on Artificial Intelligence (AAAI 2011), pages 152–157, Palo Alto, CA, USA. AAAI Press. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. 
Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of IJCAI, volume 11, pages 2764–2770. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. Proceedings of EMNLP 2015. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of Conference on Empirical Methods in Natural Language Processing (to appear). Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved relation classification by deep recurrent neural networks with data augmentation. arXiv preprint arXiv:1601.03651. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2015. ABCNN: attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. Mo Yu, Matthew Gormley, and Mark Dredze. 2014. Factor-based compositional embedding models. In NIPS Workshop on Learning Semantics. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING, pages 2335–2344. Shu Zhang, Dequan Zheng, Xinchen Hu, and Ming Yang. 2015. Bidirectional long short-term memory networks for relation classification. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation pages, pages 73–78. Zhu Zhang. 2004. Weakly-supervised relation classification for information extraction. In In Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management. Guodong Zhou, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 427–434. Association for Computational Linguistics. 1307
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1308–1318, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Knowledge Base Completion via Coupled Path Ranking Quan Wang†, Jing Liu‡, Yuanfei Luo†, Bin Wang†, Chin-Yew Lin‡ †Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China {wangquan,luoyuanfei,wangbin}@iie.ac.cn ‡Microsoft Research, Beijing 100080, China {liudani,cyl}@microsoft.com Abstract Knowledge bases (KBs) are often greatly incomplete, necessitating a demand for KB completion. The path ranking algorithm (PRA) is one of the most promising approaches to this task. Previous work on PRA usually follows a single-task learning paradigm, building a prediction model for each relation independently with its own training data. It ignores meaningful associations among certain relations, and might not get enough training data for less frequent relations. This paper proposes a novel multi-task learning framework for PRA, referred to as coupled PRA (CPRA). It first devises an agglomerative clustering strategy to automatically discover relations that are highly correlated to each other, and then employs a multi-task learning strategy to effectively couple the prediction of such relations. As such, CPRA takes into account relation association and enables implicit data sharing among them. We empirically evaluate CPRA on benchmark data created from Freebase. Experimental results show that CPRA can effectively identify coherent clusters in which relations are highly correlated. By further coupling such relations, CPRA significantly outperforms PRA, in terms of both predictive accuracy and model interpretability. 1 Introduction Knowledge bases (KBs) like Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2014), and NELL (Carlson et al., 2010) are extremely useful resources for many NLP tasks (Cucerzan, 2007; Schuhmacher and Ponzetto, 2014). They provide large collections of facts about entities and their relations, typically stored as (head entity, relation, tail entity) triples, e.g., (Paris, capitalOf, France). Although such KBs can be impressively large, they are still quite incomplete and missing crucial facts, which may reduce their usefulness in downstream tasks (West et al., 2014; Choi et al., 2015). KB completion, i.e., automatically inferring missing facts by examining existing ones, has thus attracted increasing attention. Approaches to this task roughly fall into three categories: (i) path ranking algorithms (PRA) (Lao et al., 2011); (ii) embedding techniques (Bordes et al., 2013; Guo et al., 2015); and (iii) graphical models such as Markov logic networks (MLN) (Richardson and Domingos, 2006). This paper focuses on PRA, which is easily interpretable (as opposed to embedding techniques) and requires no external logic rules (as opposed to MLN). The key idea of PRA is to explicitly use paths connecting two entities to predict potential relations between them. In PRA, a KB is encoded as a graph which consists of a set of heterogeneous edges. Each edge is labeled with a relation type that exists between two entities. Given a specific relation, random walks are first employed to find paths between two entities that have the given relation. Here a path is a sequence of relations linking two entities, e.g., h bornIn −−−−−→e capitalOf −−−−−−−−→t. 
These paths are then used as features in a binary classifier to predict if new instances (i.e., entity pairs) have the given relation. While KBs are naturally composed of multiple relations, PRA models these relations separately during the inference phase, by learning an individual classifier for each relation. We argue, however, that it will be beneficial for PRA to model certain relations in a collective way, particularly when the relations are closely related to each other. For example, given two relations bornIn and livedIn, 1308 there must be a lot of paths (features) that are predictive for both relations, e.g., h nationality −−−−−−−−−−→e hasCapital −−−−−−−−−→t. These features make the corresponding relation classification tasks highly related. Numerous studies have shown that learning multiple related tasks simultaneously (a.k.a. multi-task learning) usually leads to better predictive performance, profiting from the relevant information available in different tasks (Carlson et al., 2010; Chapelle et al., 2010). This paper proposes a novel multi-task learning framework that couples the path ranking of multiple relations, referred to as coupled PRA (CPRA). The new model needs to answer two critical questions: (i) which relations should be coupled, and (ii) in what manner they should be coupled. As to the first question, it is obvious that not all relations are suitable to be learned together. For instance, modeling bornIn together with hasWife might not bring any real benefits, since there are few common paths between these two relations. CPRA introduces a common-path based similarity measure, and accordingly devises an agglomerative clustering strategy to group relations. Only relations that are grouped into the same cluster will be coupled afterwards. As to the second question, CPRA follows the common practice of multi-task learning (Evgeniou and Pontil, 2004), and couples relations by using classifiers with partially shared parameters. Given a cluster of relations, CPRA builds the classifiers upon (i) relation-specific parameters to address the specifics of individual relations, and (ii) shared parameters to model the commonalities among different relations. These two types of parameters are balanced by a coupling coefficient, and learned jointly for all relations. In this way CPRA couples the classification tasks of multiple relations, and enables implicit data sharing and regularization. The major contributions of this paper are as follows. (i) We design a novel framework for multitask learning with PRA, i.e., CPRA. To the best of our knowledge, this is the first study on multi-task PRA. (ii) We empirically verify the effectiveness of CPRA on a real-world, large-scale KB. Specifically, we evaluate CPRA on benchmark data created from Freebase. Experimental results show that CPRA can effectively identify coherent clusters in which relations are highly correlated. By further coupling such relations, CPRA substantially outperforms PRA, in terms of not only predictive accuracy but also model interpretability. (iii) We compare CPRA and PRA to the embedding-based TransE model (Bordes et al., 2013), and demonstrate their superiority over TransE. As far as we know, this is the first work that formally compares PRA-style approaches to embedding-based ones, on publicly available Freebase data. In the remainder of this paper, we first review related work in Section 2, and formally introduce PRA in Section 3. We then detail the proposed CPRA framework in Section 4. 
Experiments and results are reported in Section 5, followed by the conclusion and future work in Section 6. 2 Related Work We first review three lines of related work: (i) KB completion, (ii) PRA and its extensions, and (iii) multi-task learning, and then discuss the connection between CPRA and previous approaches. KB completion. This task is to automatically infer missing facts from existing ones. Prior work roughly falls into three categories: (i) path ranking algorithms (PRA) which use paths that connect two entities to predict potential relations between them (Lao et al., 2011; Lao and Cohen, 2010); (ii) embedding-based models which embed entities and relations into a latent vector space and make inferences in that space (Nickel et al., 2011; Bordes et al., 2013); (iii) probabilistic graphical models such as the Markov logic network (MLN) and its variants (Pujara et al., 2013; Jiang et al., 2012). This paper focuses on PRA, since it is easily interpretable (as opposed to embedding-based models) and requires no external logic rules (as opposed to MLN and its variants). PRA and its extensions. PRA is a random walk inference technique designed for predicting new relation instances in KBs, first proposed by Lao and Cohen (2010). Recently various extensions have been explored, ranging from incorporating a text corpus as additional evidence during inference (Gardner et al., 2013; Gardner et al., 2014), to introducing better schemes to generate more predictive paths (Gardner and Mitchell, 2015; Shi and Weninger, 2015), or using PRA in a broader context such as Google’s Knowledge Vault (Dong et al., 2014). All these approaches are based on some single-task version of PRA, while our work explores multi-task learning for it. Multi-task learning. Numerous studies have shown that learning multiple related tasks simulta1309 neously can provide significant benefits relative to learning them independently (Caruana, 1997). A key ingredient of multi-task learning is to model the notion of task relatedness, through either parameter sharing (Evgeniou and Pontil, 2004; Ando and Zhang, 2005) or feature sharing (Argyriou et al., 2007; He et al., 2014). In recent years, there has been increasing work showing the benefits of multi-task learning in NLP-related tasks, such as relation extraction (Jiang, 2009; Carlson et al., 2010) and machine translation (Sennrich et al., 2013; Cui et al., 2013; Dong et al., 2015). This paper investigates the possibility of multi-task learning with PRA, in a parameter sharing manner. Connection with previous methods. Actually, modeling multiple relations collectively is a common practice in embedding-based approaches. In such a method, embeddings are learned jointly for all relations, over a set of shared latent features (entity embeddings), and hence can capture meaningful associations among different relations. As shown by (Toutanova and Chen, 2015), observed features such as PRA paths usually perform better than latent features for KB completion. In this context, CPRA is designed in a way that gets the multi-relational benefit of embedding techniques while keeping PRA-style path features. Nickel et al. (2014) and Neelakantan et al. (2015) have tried similar ideas. However, their work focuses on improving embedding techniques with observed features, while our approach aims at improving PRA with multi-task learning. 
3 Path Ranking Algorithm PRA was first proposed by Lao and Cohen (2010), and later slightly modified in various ways (Gardner et al., 2014; Gardner and Mitchell, 2015). The key idea of PRA is to explicitly use paths that connect two entities as features to predict potential relations between them. Here a path is a sequence of relations ⟨r1, r2, · · · , rℓ⟩that link two entities. For example, ⟨bornIn, capitalOf⟩is a path linking SophieMarceau to France, through an intermediate node Paris. Such paths are then used as features to predict the presence of specific relations, e.g., nationality. A typical PRA model consists of three steps: feature extraction, feature computation, and relation-specific classification. Feature extraction. The first step is to generate and select path features that are potentially useful for predicting new relation instances. To this end, PRA first encodes a KB as a multi-relation graph. Given a pair of entities (h, t), PRA then finds the paths by performing random walks over the graph, recording those starting from h and ending at t with bounded lengths. More exhaustive strategies like breadth-first (Gardner and Mitchell, 2015) or depth-first (Shi and Weninger, 2015) search could also be used to enumerate the paths. After that a set of paths are selected as features, according to some precision-recall measure (Lao et al., 2011), or simply frequency (Gardner et al., 2014). Feature computation. Once path features are selected, the next step is to compute their values. Given an entity pair (h, t) and a path π, PRA computes the feature value as a random walk probability p(t|h, π), i.e., the probability of arriving at t given a random walk starting from h and following exactly all relations in π. Computing these random walk probabilities could be at great expense. Gardner and Mitchell (2015) recently showed that such probabilities offer no discernible benefits. So they just used a binary value to indicate the presence or absence of each path. Similarly, Shi and Weninger (2015) used the frequency of a path as its feature value. Besides paths, other features such as path bigrams and vector space similarities could also be incorporated (Gardner et al., 2014). Relation-specific classification. The last step of PRA is to train an individual classifier for each relation, so as to judge whether two entities should be linked by that relation. Given a relation and a set of training instances (i.e., pairs of entities that are linked by the relation or not, with features selected and computed as above), one can use any kind of classifier to train a model. Most previous work simply chooses logistic regression. 4 Coupled Path Ranking Algorithm As we can see, PRA (as well as its variants) follows a single-task learning paradigm, which builds a classifier for each relation independently with its own training data. We argue that such a single-task strategy might not be optimal for KB completion: (i) by learning the classifiers independently, it fails to discover and leverage meaningful associations among different relations; (ii) it might not perform well on less frequent relations for which only a few training instances are available. This section presents coupled PRA (CPRA), a novel multi-task learning framework that couples the path ranking of multiple relations. Through a multi-task strat1310 egy, CPRA takes into account relation association and enables implicit data sharing among them. 
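To make the single-relation PRA pipeline of Section 3 concrete before formalizing CPRA, the toy sketch below enumerates bounded-length relation paths between an entity pair and turns them into frequency-valued features. It is an illustrative simplification (an exhaustive breadth-first enumeration over a three-triple graph with hand-picked entities), not the random-walk procedure of Lao and Cohen (2010).

```python
from collections import defaultdict

# Toy KB triples (head, relation, tail); an inverse edge r^-1 is added for
# every relation, mirroring the usual PRA graph construction.
triples = [
    ("SophieMarceau", "bornIn", "Paris"),
    ("Paris", "capitalOf", "France"),
    ("SophieMarceau", "nationality", "France"),
]

graph = defaultdict(list)        # node -> list of (relation, neighbour)
for h, r, t in triples:
    graph[h].append((r, t))
    graph[t].append((r + "^-1", h))

def relation_paths(source, target, max_len=3):
    """Enumerate relation paths (sequences of edge labels) leading from
    source to target within max_len steps, by breadth-first expansion."""
    frontier = [(source, ())]    # (current node, path so far)
    found = []
    for _ in range(max_len):
        new_frontier = []
        for node, path in frontier:
            for rel, nxt in graph[node]:
                extended = path + (rel,)
                if nxt == target:
                    found.append(extended)
                new_frontier.append((nxt, extended))
        frontier = new_frontier
    return found

def path_feature_vector(pair, max_len=3):
    """Path -> frequency features for one entity pair, as in the
    frequency-valued variant of PRA feature computation."""
    feats = defaultdict(int)
    for path in relation_paths(*pair, max_len=max_len):
        feats[path] += 1
    return dict(feats)

# Candidate paths for predicting nationality(SophieMarceau, France); note that
# the direct nationality edge itself shows up as a length-1 path here, which a
# real training setup would need to handle by holding out the predicted edge.
print(path_feature_vector(("SophieMarceau", "France")))
```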
4.1 Problem Formulation Suppose we are given a KB containing a collection of triples O = {(h, r, t)}. Each triple is composed of two entities h, t ∈E and their relation r ∈R, where E is the entity set and R the relation set. The KB is then encoded as a graph G, with entities represented as nodes, and triple (h, r, t) a directed edge from node h to node t. We formally define KB completion as a binary classification problem. That is, given a particular relation r, for any entity pair (h, t) such that (h, r, t) /∈O, we would like to judge whether h and t should be linked by r, by exploiting the graph structure of G. Let R ⊆R denote a set of relations to be predicted. Each relation r ∈R is associated with a set of training instances. Here a training instance is an entity pair (h, t), with a positive label if (h, r, t) ∈ O or a negative label otherwise.1 For each of the entity pairs, path features could be extracted and computed using techniques described in Section 3. We denote by Πr the set of path features extracted for relation r, and define its training set as Tr = {(xir, yir)}. Here xir is the feature vector for an entity pair, with each dimension corresponding to a path π ∈Πr, and yir = ±1 is the label. Note that our primary goal is to verify the possibility of multi-task learning with PRA. It is beyond the scope of this paper to further explore better feature extraction or computation. Given the relations and their training instances, CPRA performs KB completion using a multi-task learning strategy. It consists of two components: relation clustering and relation coupling. The former automatically discovers highly correlated relations, and the latter further couples the learning of these relations, described in detail as follows. 4.2 Relation Clustering It is obvious that not all relations are suitable to be coupled. We propose an agglomerative clustering algorithm to automatically discover relations that are highly correlated and should be learned together. Our intuition is that relations sharing more common paths (features) are probably more similar in classification, and hence should be coupled. Specifically, we start with |R| clusters and each cluster contains a single relation r ∈R. Here |·| is 1We will introduce the details of generating negative training instances in Section 5.1. the cardinality of a set. Then we iteratively merge the most similar clusters, say Cm and Cn, into a new cluster C. The similarity between two clusters is defined as: Sim(Ci, Cj) = |ΠCi ∩ΠCj| min(|ΠCi|, |ΠCj|), (1) where ΠCi is the feature set associated with cluster Ci (if Ci contains a single relation, ΠCi the feature set associated with that relation). It essentially measures the overlap between two feature sets. The larger the overlap is, the higher the similarity will be. Once two clusters are merged, we update the feature set associated with the new cluster: ΠC = ΠCm ∪ΠCn. The algorithm stops when the highest cluster similarity is below some predefined threshold δ. This paper empirically sets δ = 0.5. As such, relations sharing a substantial number of common paths are grouped into the same cluster. 4.3 Relation Coupling After clustering, the next step of CPRA is to couple the path ranking of different relations within each cluster, i.e., to learn the classification tasks for these relations simultaneously. We employ a multi-task classification algorithm similar to (Evgeniou and Pontil, 2004), and learn the classifiers jointly in a parameter sharing manner. 
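Before turning to the coupling itself, the agglomerative strategy of Section 4.2 can be sketched as follows, assuming that the path features extracted for each relation are available as non-empty sets; the relation and path names in the toy input are hypothetical.

```python
def cluster_relations(feature_sets, delta=0.5):
    """Agglomerative clustering of relations by shared path features (Eq. 1).

    feature_sets: dict mapping relation name -> non-empty set of path features.
    Returns a list of clusters, each a list of relation names.
    """
    clusters = [[r] for r in feature_sets]                  # one cluster per relation
    feats = [set(feature_sets[r]) for r in feature_sets]    # Pi_C for each cluster

    def sim(a, b):
        # Overlap measure of Eq. (1) between the two clusters' feature sets.
        return len(feats[a] & feats[b]) / min(len(feats[a]), len(feats[b]))

    while len(clusters) > 1:
        # Find the most similar pair of clusters.
        pairs = [(sim(a, b), a, b)
                 for a in range(len(clusters)) for b in range(a + 1, len(clusters))]
        best, a, b = max(pairs)
        if best < delta:             # stopping criterion
            break
        # Merge cluster b into cluster a; the merged feature set is the union.
        clusters[a].extend(clusters[b])
        feats[a] |= feats[b]
        del clusters[b], feats[b]
    return clusters

# Toy input with hypothetical path features per relation.
feature_sets = {
    "org/place-founded":    {"headquarter-city", "contain^-1", "class->class^-1"},
    "org/headquarter-city": {"headquarter-city", "contain^-1", "place-founded"},
    "people/has-wife":      {"spouse", "parent-of"},
}
print(cluster_relations(feature_sets, delta=0.5))
# -> [['org/place-founded', 'org/headquarter-city'], ['people/has-wife']]
```

The overlap measure of Eq. (1) drives the merges, and each merge unions the two feature sets, exactly as described above.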
Consider a cluster containing K relations C = {r1, r2, · · · , rK}. Recall that during the clustering phase a shared feature set has been generated for that cluster, i.e., ΠC = Πr1 ∪· · · ∪ΠrK. We first reform the training instances for the K relations using this shared feature set, so that all training data is represented in the same space.2 We denote by Tk = {(xik, yik)}Nk i=1 the reformed training data associated with the k-th relation. Then our goal is to jointly learn K classifiers f1, f2, · · · , fK such that fk(xik) ≈yik. We first assume that the classifier for each relation has a linear form fk(x)=wk · x + bk, where wk ∈Rd is the weight vector and bk the bias. To model associations among different relations, we further assume that all wk and bk can be written, for every k ∈{1, · · · , K}, as: wk = w0 + vk and bk = b0. (2) Here the shared w0 is used to model the commonalities among different relations, and the relationspecific vk to address the specifics of individual 2Note that Πrk ⊆ΠC. We just assign zero values to features that are contained in ΠC but not in Πrk. 1311 relations. If the relations are closely related (vk ≈ 0), they will have similar weights (wt ≈w0) on the common paths. We use the same bias b0 for all the relations.3 We estimate vk, w0, and b0 simultaneously in a joint optimization problem, defined as follows. Problem 1 CPRA amounts to solving the general optimization problem: min {vk},w0,b0 K ∑ k=1 Nk ∑ i=1 ℓ(xik, yik) + λ1 K K ∑ k=1 ∥vk∥2 2 + λ2 ∥w0∥2 2 , where ℓ(xik, yik) is the loss on a training instance. It can be instantiated into a logistic regression (LR) or support vector machine (SVM) version, by respectively defining the loss ℓ(xik, yik) as: ℓ(xik, yik) = log (1 + exp (−yikfk(xik))) , ℓ(xik, yik) = [1 −yikfk(xik)]+ , where fk(xik) = (w0 + vk) · xik + b0. We call them CPRA-LR and CPRA-SVM respectively. In this problem, λ1 and λ2 are regularization parameters. By adjusting their values, we control the degree of parameter sharing among different relations. The larger the ratio λ1 λ2 is, the more we believe that all wt should conform to the common model w0, and the smaller the relation-specific weight vt will be. The multi-task learning problem can be directly linked to a standard single-task learning one, built on all training data from different relations. Proposition 1 Suppose the training data associated with the k-th relation, for every k=1, · · · , K, is transformed into: exik = [ xik √ρK , 0, · · · , 0 | {z } k−1 , xik, 0, · · · , 0 | {z } K−k ], where 0 ∈Rd is a vector whose coordinates are all zero, and ρ = λ2 λ1 a coupling coefficient. Consider a linear classifier for the transformed data ef(ex) = ew · ex + eb, with ew and eb constructed as: ew = [ √ ρKw0, v1, · · · , vK] and eb = b0. Then the objective function of Problem 1 is equivalent to: L = K ∑ k=1 Nk ∑ i=1 eℓ(exik, yik) + eλ ∥ew∥2 2 , 3It implicitly assumes that all the relations have the same proportion of positive instances. This assumption actually holds since given any relation we can always generate the same number of negative instances for each positive one. We set this number to 4 in our experiments. where eℓ= log(1 + exp(−yik ef(exik))) is a logistic loss for CPRA-LR, and eℓ= [1 −yik ef(exik)]+ a hinge loss for CPRA-SVM; and eλ = λ1 K . That means, after transforming data from different relations into a unified representation, Problem 1 is equivalent to a standard single-task learning problem, built on the transformed data from all the relations. 
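For concreteness, the data transformation used in Proposition 1 can be sketched as follows; the helper name couple_features and the toy shapes are illustrative only, and the snippet demonstrates the construction rather than a full training pipeline.

```python
import numpy as np

def couple_features(X_per_task, rho):
    """Transform per-relation feature matrices into the shared representation
    of Proposition 1: x~ = [x / sqrt(rho * K), 0, ..., x, ..., 0].

    X_per_task: list of K arrays, each of shape (n_k, d), already expressed
                over the cluster's shared feature set Pi_C.
    rho:        coupling coefficient rho = lambda2 / lambda1.
    Returns the stacked transformed matrix of shape (sum_k n_k, (K + 1) * d).
    """
    K = len(X_per_task)
    d = X_per_task[0].shape[1]
    rows = []
    for k, X in enumerate(X_per_task):
        shared = X / np.sqrt(rho * K)                # block tied to the common w0
        specific = np.zeros((X.shape[0], K * d))     # one block per relation
        specific[:, k * d:(k + 1) * d] = X           # block tied to v_k
        rows.append(np.hstack([shared, specific]))
    return np.vstack(rows)

# Toy usage: two relations in one cluster, three shared path features.
rng = np.random.default_rng(1)
X1, X2 = rng.random((4, 3)), rng.random((5, 3))
X_tilde = couple_features([X1, X2], rho=1.0)
print(X_tilde.shape)   # -> (9, 9): ready for a standard LR/SVM solver
```

A single linear classifier trained on the transformed matrix yields a weight vector of the form [√(ρK) w0, v1, · · · , vK], from which each relation-specific model is recovered as wk = w0 + vk.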
So it can easily be solved by existing tools such as LR or SVM. 5 Experiments In this section we present empirical evaluation of CPRA in the KB completion task. 5.1 Experimental Setups We create our data on the basis of FB15K (Bordes et al., 2011)4, a relatively dense subgraph of Freebase containing 1,345 relations and the corresponding triples. KB graph construction. We notice that in most cases FB15K encodes a relation and its reverse relation at the same time. That is, once a new fact is observed, FB15K creates two triples for it, e.g., (x, film/edited-by, y) and (y, editor/film, x). Reverse relations provide no additional knowledge. They may even hurt the performance of PRA-style methods. Actually, to enhance graph connectivity, PRA-style methods usually automatically add an inverse version for each relation in a KB (Lao and Cohen, 2010; Lao et al., 2011). That is, for each observed triple (h, r, t), another triple (t, r−1, h) is constructed and added to the KB. Consider the prediction of a relation, say film/edited-by. In the training phase, we could probably find that every two entities connected by this relation are also connected by the path editor/film−1, and hence assign an extremely high weight to it.5 However, in the testing phase, for any entity pair (x, y) such that (y, editor/film, x) has not been encoded, we might not even find that path and hence could always make a negative prediction.6 For this reason, we remove reverse relations in FB15K. Specifically, we regard r2 to be a reverse relation of r1 if the triple (t, r2, h) holds whenever (h, r1, t) is observed, and we randomly discard 4https://everest.hds.utc.fr/doku.php?id=en:smemlj12 5For every observed triple (x, film/edited-by, y), FB15K also encodes (y, editor/film, x), for which (x, editor/film−1, y) is further constructed. 6Note that such test cases are generally more meaningful: if we already know (y, editor/film, x), predicting (x, film/edited-by, y) could be trivial. 1312 one of the two relations.7 As such, we keep 774 out of 1,345 relations in FB15K, covering 14,951 entities and 327,783 triples. Then we build a graph based on this data and use it as input to CPRA (and our baseline methods). Labeled instance generation. We select 171 relations to test our methods. To do so, we pick 10 popular domains, including award, education, film, government, location, music, olympics, organization, people, and tv. Relations in these domains with at least 50 triples observed for them are selected. For each of the 171 relations, we split the associated triples into roughly 80% training, 10% validation, and 10% testing. Since the triple number varies significantly among the relations, we allow at most 200 validation/testing triples for each relation, so as to make the test cases as balanced as possible. Note that validation and testing triples are not used for constructing the graph. We generate positive instances for each relation directly from these triples. Given a relation r and a triple (h, r, t) observed for it (training, validation, or testing), we take the pair of entities (h, t) as a positive instance for that relation. Then we follow (Shi and Weninger, 2015; Krompaß et al., 2015) to generate negative instances. Given each positive instance (h, t) we generate four negative ones, two by randomly corrupting the head h, and the other two the tail t. To make the negative instances as difficult as possible, we corrupt a position using only entities that have appeared in that position. 
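A simplified sketch of this corruption scheme is given below; the entity names are illustrative, and details such as the retry budget and the handling of duplicate negatives are our own simplifications rather than the exact procedure of Shi and Weninger (2015) and Krompaß et al. (2015).

```python
import random

def generate_negatives(positives, num_per_positive=4, seed=0):
    """Corrupt positive pairs of one relation into negative training pairs,
    sampling replacement entities only from those observed in the same position.

    positives: list of (head, tail) pairs observed for the relation.
    Returns negative (head, tail) pairs, num_per_positive per positive (half by
    corrupting the head, half the tail), never overlapping the positives.
    Duplicate negatives are possible in this simplified version.
    """
    rng = random.Random(seed)
    pos = set(positives)
    heads = sorted({h for h, _ in positives})
    tails = sorted({t for _, t in positives})
    negatives = []
    for h, t in positives:
        for i in range(num_per_positive):
            for _ in range(100):                        # retry budget
                if i < num_per_positive // 2:           # corrupt the head
                    cand = (rng.choice(heads), t)
                else:                                   # corrupt the tail
                    cand = (h, rng.choice(tails))
                if cand not in pos:
                    negatives.append(cand)
                    break
    return negatives

# Toy usage for a capitalOf-like relation (entities are illustrative).
positives = [("Paris", "France"), ("London", "UK"), ("Rome", "Italy")]
print(generate_negatives(positives))
# e.g. [('London', 'France'), ('Rome', 'France'), ('Paris', 'UK'), ...]
```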
That means, given the relation capitalOf and the positive instance (Paris, France), we could generate a negative instance (Paris, UK) but never (Paris, NBA), since NBA never appears as a tail entity of the relation. We further ensure that the negative instances do not overlap with the positive ones. Feature extraction and computation. Given the labeled instances, we extract path features for them using the code provided by Shi and Weninger (2015)8. It is a depth-first search strategy that enumerates all paths between two entities. We set the maximum path length to be ℓ= 3. There are about 8.2% of the labeled instances for which no path could be extracted. We remove such cases, giving on average about 5,250 training, 323 validation, and 331 testing instances per relation. Then we remove paths that appear only once in each relation, getting 5,515 features on average per relation. We 7We still add an inverse version for the relation kept during path extraction. 8https://github.com/nddsg/KGMiner # Relations 774 # Entities 14,951 # Triples 327,783 # Relations tested 171 # Avg. training instances/relation 5,250 # Avg. validation instances/relation 323 # Avg. testing instances/relation 331 # Avg. features/relation 5,515 Table 1: Statistics of the data. simply compute the value of each feature as its frequency in an instance. Table 1 lists the statistics of the data used in our experiments. Evaluation metrics. As evaluation metrics, we use mean average precision (MAP) and mean reciprocal rank (MRR), following recent work evaluating KB completion performance (West et al., 2014; Gardner and Mitchell, 2015). Both metrics evaluate some ranking process: if a method ranks the positive instances before the negative ones for each relation, it will get a high MAP or MRR. Baseline methods. We compare CPRA to traditional single-task PRA. CPRA first groups the 171 relations into clusters, and then learns classifiers jointly for relations within the same cluster. We implement two versions of it: CPRA-LR and CPRA-SVM. As we have shown in Proposition 1, both of them could be solved by standard classification tools. PRA learns an individual classifier for each of the relations, using LR or SVM classification techniques, denoted by PRA-LR or PRASVM. We use LIBLINEAR (Fan et al., 2008)9 to solve the LR and SVM classification problems. For all these methods, we tune the cost c in the range of {2−5, 2−4, · · · , 24, 25}. And we set the coupling coefficient ρ = λ2 λ1 in CPRA in the range of {0.1, 0.2, 0.5, 1, 2, 5, 10}. We further compare CPRA to TransE, a widely adopted embedding-based method (Bordes et al., 2013). TransE learns vector representations for entities and relations (i.e., embeddings), and uses the learned embeddings to determine the plausibility of missing facts. Such plausibility can then be used to rank the labeled instances. We implement TransE using the code provided by Bordes et al. (2013)10. To learn embeddings, we take as input the triples used to construct the graph (from which CPRA and PRA extract their paths). 
We tune the embedding dimension in {20, 50, 100}, the margin in {0.1, 0.2, 0.5, 1, 2, 5}, and the learning rate 9http://www.csie.ntu.edu.tw/ cjlin/liblinear 10https://github.com/glorotxa/SME 1313 film/casting-director gov-jurisdiction/dist-represent film/cinematography location/contain film/costume-design-by location/adjoin film/art-direction-by us-county/county-seat film/crewmember county-place/county film/set-decoration-by location/partially-contain film/production-design-by region/place-export film/edited-by film/written-by film/story-by org/place-founded country/divisions org/headquarter-city country/capital org/headquarter-state country/fst-level-divisions org/geographic-scope country/snd-level-divisions org/headquarter-country admin-division/capital org/service-location tv/tv-producer music-group-member/instrument tv/recurring-writer music-artist/recording-role tv/program-creator music-artist/track-role tv/regular-appear-person music-group-member/role tv/tv-actor Table 2: Six largest clusters of relations (with the stopping criterion δ = 0.5). in {10−4, 10−3, 10−2, 10−1, 1}. For details please refer to (Bordes et al., 2013). For each of these methods, we select the optimal configuration that leads to the highest MAP on the validation set and report its performance on the test set. 5.2 Relation Clustering Results We first test the effectiveness of our agglomerative strategy (Section 4.2) in relation clustering. With the stopping criterion δ = 0.5, 96 out of the 171 relations are grouped into clusters which contain at least two relations. Each of these 96 relations will later be learned jointly with some other relations. The other 75 relations cannot be merged, and will still be learned individually. Table 2 shows the six largest clusters discovered by our algorithm. Relations in each cluster are arranged in the order they were merged. The results indicate that our algorithm can effectively identify coherent clusters in which relations are highly correlated to each other. For example, the top left cluster describes relations between a film and its crew members, and the middle left between an organization and a location. During clustering we might obtain clusters that contain too many relations and hence too many training instances for our CPRA model to learn efficiently. We split such clusters into sub-clusters, either according to the domain (e.g., the film cluster and tv cluster) or randomly (e.g., the two location clusters on the top right). 5.3 KB Completion Results We further test the effectiveness of our multi-task learning strategy (Section 4.3) in KB completion. Table 3 gives the results on the 96 relations that are actually involved in multi-tasking learning (i.e., grouped into clusters with size larger than one).11 The 96 relations are grouped into 29 clusters, and relations within the same cluster are learned jointly. Table 3 reports (i) MAP and MRR within each cluster and (ii) overall MAP and MRR on the 96 relations. Numbers marked in bold type indicate that CPRA-LR/SVM outperforms PRA-LR/SVM, within a cluster (with its ID listed in the first column) or on all the 96 relations (ALL). We judge statistical significance of the overall improvements achieved by CPRA-LR/SVM over PRA-LR/SVM and TransE, using a paired t-test. The average precision (or reciprocal rank) on each relation is used as paired data. The symbol “∗∗” indicates a significance level of p < 0.0001, and “∗” a significance level of p < 0.05. 
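For reference, the MAP and MRR figures below are macro-averages of per-relation average precision and reciprocal rank over instances ranked by classifier score; the sketch that follows illustrates these standard definitions only, not the evaluation script behind Table 3, and the exact grouping of instances into ranking queries follows the cited protocols (West et al., 2014; Gardner and Mitchell, 2015).

```python
def average_precision(ranked_labels):
    """AP for one relation: ranked_labels holds the gold labels (1 = positive,
    0 = negative) of its test instances, sorted by decreasing classifier score."""
    hits, precisions = 0, []
    for k, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(precisions), 1)

def reciprocal_rank(ranked_labels):
    """RR for one relation: inverse rank of the first positive instance."""
    for k, label in enumerate(ranked_labels, start=1):
        if label == 1:
            return 1.0 / k
    return 0.0

def map_mrr(per_relation_rankings):
    """Macro-average AP and RR over relations (one ranking per relation here)."""
    aps = [average_precision(r) for r in per_relation_rankings]
    rrs = [reciprocal_rank(r) for r in per_relation_rankings]
    n = len(per_relation_rankings)
    return sum(aps) / n, sum(rrs) / n

# Toy example with two relations: the first ranks all positives on top.
print(map_mrr([[1, 1, 0, 0, 0], [0, 1, 1, 0, 0]]))
# -> (0.7916..., 0.75)
```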
From the results, we can see that (i) CPRA outperforms PRA (using either LR or SVM) and TransE on the 96 relations (ALL) in both metrics. All the improvements are statistically significant, with a significance level of p < 0.0001 for MAP and a significance level of p < 0.05 for MRR. (ii) CPRA-LR/SVM outperforms PRA-LR/SVM in 22/24 out of the 29 clusters in terms of MAP. Most of the improvements are quite substantial. (iii) Improving PRA-LR and PRA-SVM in terms of MRR could be hard, since they already get the best performance (MRR = 1) in 19 out of the 29 clusters. But even so, CPRA-LR/SVM still improves 7/8 out of the remaining 10 clusters. (iv) The PRAstyle methods perform substantially better than the embedding-based TransE model in most of the 29 clusters and on all the 96 relations. This observation demonstrates the superiority of observed features (i.e., PRA paths) over latent features. Table 4 further shows the top 5 most discriminative paths (i.e., features with the highest weights) discovered by PRA-SVM (left) and CPRA-SVM (right) for each relation in the 6th cluster.12 The average precision on each relation is also provid11The other 75 relations are still learned individually. So CPRA and PRA perform the same on these relations. The MAP values on these 75 relations are 0.6360, 0.6558, 0.6543 for TransE, PRA-LR, and PRA-SVM respectively, and the MRR values are 0.9049, 0.9033, and 0.9013 respectively. 12This is one of the largest clusters on which CPRA-SVM improves PRA-SVM substantially. 1314 MAP MRR TransE PRA-LR CPRA-LR PRA-SVM CPRA-SVM TransE PRA-LR CPRA-LR PRA-SVM CPRA-SVM 1 0.5419 0.5160 0.5408 0.4687 0.5204 0.7500 0.8333 1.0000 0.7778 0.8333 2 0.7480 0.7888 0.7807 0.8010 0.8092 1.0000 1.0000 1.0000 1.0000 1.0000 3 0.4624 0.4625 0.4788 0.4634 0.4560 0.8333 1.0000 1.0000 1.0000 0.8333 4 0.5495 0.5378 0.5423 0.5385 0.5460 0.7667 0.6400 0.7000 0.7167 0.7000 5 0.5164 0.5789 0.6030 0.5891 0.6072 0.8333 0.6667 1.0000 0.8333 1.0000 6 0.6918 0.7733 0.7950 0.7369 0.8084 1.0000 0.8333 0.8333 0.8056 0.9167 7 0.7381 0.7531 0.7754 0.7456 0.7414 0.7500 1.0000 1.0000 1.0000 1.0000 8 0.4258 0.5180 0.5446 0.3162 0.4606 1.0000 1.0000 1.0000 0.3056 0.7500 9 0.6353 0.7879 0.7708 0.7680 0.7685 0.7500 1.0000 1.0000 1.0000 1.0000 10 0.8615 0.7773 0.7738 0.7618 0.7507 1.0000 1.0000 1.0000 1.0000 1.0000 11 0.4549 0.5814 0.6014 0.5717 0.5896 0.8333 1.0000 1.0000 0.8750 1.0000 12 0.6202 0.7187 0.7479 0.7455 0.7457 0.7500 0.5833 1.0000 1.0000 1.0000 13 0.5530 0.6681 0.6716 0.6373 0.6502 0.6667 1.0000 1.0000 1.0000 1.0000 14 0.5082 0.4360 0.5280 0.4715 0.5806 0.3750 0.6667 0.6250 1.0000 1.0000 15 0.9881 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 16 0.5324 0.6818 0.6863 0.6522 0.6705 0.8750 1.0000 1.0000 1.0000 1.0000 17 0.3759 0.3351 0.3593 0.3273 0.3219 0.6111 0.5667 0.7778 0.5111 0.6667 18 0.9423 0.9968 1.0000 0.9947 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 19 0.7903 0.8376 0.8310 0.8296 0.8328 0.8714 0.9286 0.8571 0.8571 0.8571 20 0.7920 0.8285 0.8746 0.8491 0.8754 1.0000 1.0000 1.0000 1.0000 1.0000 21 0.4885 0.5869 0.5799 0.5554 0.5952 0.6250 1.0000 1.0000 1.0000 1.0000 22 0.7894 0.8371 0.8486 0.8371 0.8374 1.0000 1.0000 1.0000 1.0000 1.0000 23 0.7123 0.7848 0.8191 0.7811 0.7957 0.9500 1.0000 1.0000 1.0000 1.0000 24 0.5982 0.7923 0.8048 0.8204 0.8220 1.0000 1.0000 1.0000 1.0000 1.0000 25 0.6223 0.8723 0.8723 0.7785 0.8109 0.7500 1.0000 1.0000 1.0000 1.0000 26 0.5253 0.5377 0.5685 0.5337 0.5447 0.8750 0.8125 0.8750 0.7083 0.8333 27 0.8763 0.6890 0.8124 0.7014 0.8016 1.0000 0.6667 
1.0000 0.7500 1.0000 28 0.7588 0.8131 0.8154 0.8130 0.8146 1.0000 1.0000 1.0000 1.0000 1.0000 29 0.4894 0.5921 0.6543 0.6093 0.6566 0.7500 1.0000 1.0000 1.0000 1.0000 ALL 0.6540 0.7058 0.7254∗∗ 0.6943 0.7162∗∗ 0.8682 0.9061 0.9436∗ 0.8982 0.9358∗ Table 3: KB completion results on the 96 relations that have been grouped into clusters with size larger than one (with the stopping criterion δ = 0.5), and hence involved in multi-tasking learning. ed. We can observe that (i) CPRA generally discovers more predictive paths than PRA. Almost all the top paths discovered by CPRA are easily interpretable and provide sensible reasons for the final prediction, while some of the top paths discovered by PRA are hard to interpret and less predictive. Take org/place-founded as an example. All the 5 CPRA paths are useful to predict the place where an organization was founded, e.g., the 3rd one tells that “the organization headquarter in a city which is located in that place”. However, the PRA path “common/class →common/class−1 →film/debutvenue” is hard to interpret and less predictive. (ii) For the 1st/4th/6th relation on which PRA gets a low average precision, CPRA learns almost completely different top paths and gets a substantially higher average precision. While for the other relations (2nd/3rd/5th) on which PRA already performs well enough, CPRA learns similar top paths and gets a comparable average precision. We have conducted the same analyses with CPRA-LR and PRA-LR, and observed similar phenomena. All these observations demonstrate the superiority of CPRA, in terms of not only predictive accuracy but also model interpretability. 6 Conclusion In this paper we have studied the path ranking algorithm (PRA) from the viewpoint of multi-task learning. We have designed a novel multi-task learning framework for PRA, called coupled PRA (CPRA). The key idea of CPRA is to (i) automatically discover relations highly correlated to each other through agglomerative clustering, and (ii) effectively couple the prediction of such relations through multi-task learning. By coupling different relations, CPRA takes into account relation associations and enables implicit data sharing among them. We have tested CPRA on benchmark data created from Freebase. Experimental results show that CPRA can effectively identify coherent clusters in which relations are highly correlated. By further coupling such relations, CPRA significantly outperforms PRA, in terms of both predictive 1315 org/place-founded (0.4920 vs. 0.6750) org/headquarter-city location/contain−1 common/class→common/class−1→film/debut-venue org/headquarter-city common/class→common/class−1→sports-team/location org/headquarter-city→location/contain−1 employer/job-title→employer/job-title−1→location/contain org/headquarter-state→location/contain music-artist/label−1→person/place-of-birth org/headquarter-city→bibs-location/state org/headquarter-city (0.9014 vs. 0.9141) location/contain−1 location/contain−1 org/place-founded org/headquarter-state→location/contain org/headquarter-state→location/contain org/place-founded org/child−1→org/child→org/place-founded org/child−1→org/child→org/place-founded sports-team/location industry/company−1→industry/company→org/place-founded org/headquarter-state (0.9522 vs. 
0.9558) location/contain−1 location/contain−1 org/headquarter-city→location/contain−1 org/headquarter-city→location/contain−1 org/headquarter-city→bibs-location/state org/headquarter-city→bibs-location/state org/headquarter-city→county-place/county→location/contain−1 org/headquarter-city org/headquarter-city→location/contain−1→location/contain−1 org/place-founded org/geographic-scope (0.5252 vs. 0.6075) common/class→common/class−1→location/vacationer−1 location/contain−1 common/class→common/class−1→country/languages−1 org/headquarter-city→location/contain−1 common/class→common/class−1→gov-jurisdiction/gov-body−1 location/contain−1→location/contain−1 common/class→common/class−1→region/currency-of-gdp−1 org/place-founded→location/contain−1 politician/party−1→person/nationality→location/adjoins org/headquarter-city→location/contain−1→location/contain−1 org/headquarter-country (0.9859 vs. 0.9938) org/headquarter-city→airline/city-served−1→org/service-location location/contain−1 org/headquarter-city→admin-area/child−1→region/place-export org/headquarter-city→location/contain−1 org/headquarter-city→country/divisions−1→region/place-export org/headquarter-city→county-place/county→location/contain−1 org/headquarter-city→film/feat-location−1→film/feat-location location/contain−1→location/contain−1 org/headquarter-city→gov-jurisdiction/title→employer/job-title−1 org/place-founded→location/contain−1 org/service-location (0.5644 vs. 0.7044) org/headquarter-city→country/divisions−1 org/headquarter-city→location/contain−1 org-extra/service-location org/headquarter-city→county-place/county→location/contain−1 film/production-company−1→film/subjects→admin-area/child−1 location/contain−1→location/contain−1 org/legal-structure→entry/taxonomy→entry/taxonomy−1 org/place-founded→location/contain−1 airline/city-served→region/currency→region/currency-of-gdp−1 org-extra/service-location Table 4: Top paths given by PRA-SVM (left) and CPRA-SVM (right) for each relation in the 6th cluster. accuracy and model interpretability. This is the first work that investigates the possibility of multi-task learning with PRA, and we just provide a very simple solution. There are still many interesting topics to study. For instance, the agglomerative clustering strategy can only identify highly correlated relations, i.e., those sharing a lot of common paths. Relations that are only loosely correlated, e.g., those sharing no common paths but a lot of sub-paths, will not be identified. We would like to design new mechanisms to discover loosely correlated relations, and investigate whether coupling such relations still provides benefits. Another example is that the current method is a two-step approach, performing relation clustering first and then relation coupling. It will be interesting to study whether one can merge the clustering step and the coupling step so as to have a richer inter-task dependent structure. We will investigate such topics in our future work. Acknowledgments We would like to thank Baoxu Shi for providing the code for path extraction. We would also like to thank the anonymous reviewers for their valuable comments and suggestions. This work is supported by the National Natural Science Foundation of China (grant No. 61402465), the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDA06030200), and the Microsoft Research Asia StarTrack Program. This work was done when Quan Wang was a visiting researcher at Microsoft Research Asia. 1316 References Rie Kubota Ando and Tong Zhang. 
2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Reseach, 6:1817–1853. Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2007. Multi-task feature learning. In Advances in Neural Information Processing Systems, pages 41–48. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, pages 301–306. Antoine Bordes, Nicolas Usunier, Alberto GarciaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, pages 2787–2795. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, pages 1306–1313. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Olivier Chapelle, Pannagadatta Shivaswamy, Srinivas Vadrevu, Kilian Weinberger, Ya Zhang, and Belle Tseng. 2010. Multi-task learning for boosting with application to web search ranking. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1189–1198. Eunsol Choi, Tom Kwiatkowski, and Luke Zettlemoyer. 2015. Scalable semantic parsing with partial ontologies. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1311–1320. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 708– 716. Lei Cui, Xilun Chen, Dongdong Zhang, Shujie Liu, Mu Li, and Ming Zhou. 2013. Multi-domain adaptation for SMT using multi-task learning. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1055– 1065. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 601–610. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1723–1732. Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi-task learning. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109– 117. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Matt Gardner and Tom Mitchell. 2015. 
Efficient and expressive knowledge base completion using subgraph feature extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1488–1498. Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic cues. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 833–838. Matt Gardner, Partha Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 397–406. Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 84–94. Jingrui He, Yan Liu, and Qiang Yang. 2014. Linking heterogeneous input spaces with pivots for multitask learning. In Proceedings of the 2014 SIAM International Conference on Data Mining, pages 181– 189. 1317 Shangpu Jiang, Daniel Lowd, and Dejing Dou. 2012. Learning to refine an automatically extracted knowledge base using markov logic. In Proceedings of the 2012 IEEE International Conference on Data Mining, pages 912–917. Jing Jiang. 2009. Multi-task transfer learning for weakly-supervised relation extraction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1012–1020. Denis Krompaß, Stephan Baier, and Volker Tresp. 2015. Type-constrained representation learning in knowledge graphs. In Proceedings of the 13th International Semantic Web Conference, pages 640–655. Ni Lao and William W. Cohen. 2010. Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81(1):53–67. Ni Lao, Tom Mitchell, and William W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 529–539. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, et al. 2014. Dbpedia: A largescale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal. Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 156–166. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning, pages 809–816. Maximilian Nickel, Xueyan Jiang, and Volker Tresp. 2014. Reducing the rank in relational factorization models by including observable patterns. In Advances in Neural Information Processing Systems, pages 1179–1187. Jay Pujara, Hui Miao, Lise Getoor, and William Cohen. 2013. Knowledge graph identification. In Proceedings of the 11th International Semantic Web Conference, pages 542–557. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. 
Machine Learning, 62(12):107–136. Michael Schuhmacher and Simone Paolo Ponzetto. 2014. Knowledge-based graph document modeling. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining, pages 543– 552. Rico Sennrich, Holger Schwenk, and Walid Aransa. 2013. A multi-domain translation model framework for statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 832–840. Baoxu Shi and Tim Weninger. 2015. Fact checking in large knowledge graphs: A discriminative predict path mining approach. In arXiv:1510.05911. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and Their Compositionality. Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of the 23rd International Conference on World Wide Web, pages 515– 526. 1318
2016
124
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1319–1329, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Larger-Context Language Modelling with Recurrent Neural Network∗ Tian Wang Center for Data Science New York University [email protected] Kyunghyun Cho Courant Institute of Mathematical Sciences and Center for Data Science New York University [email protected] Abstract In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language model. We introduce a late fusion approach to a recurrent language model based on long shortterm memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through the evaluation on four corpora (IMDB, BBC, Penn TreeBank, and Fil9), we demonstrate that the proposed model improves perplexity significantly. In the experiments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM. By analyzing the trained larger-context language model, we discover that content words, including nouns, adjectives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language model improves the unconditional language model by capturing the theme of a document better and more easily. 1 Introduction The goal of language modelling is to estimate the probability distribution of various linguistic units, e.g., words, sentences (Rosenfeld, 2000). Among the earliest techniques were count-based n-gram language models which intend to assign the probability distribution of a given word observed af∗Recently, (Ji et al., 2015) independently proposed a similar approach. ter a fixed number of previous words. Later Bengio et al. (2003) proposed feed-forward neural language model, which achieved substantial improvements in perplexity over count-based language models. Bengio et al. showed that this neural language model could simultaneously learn the conditional probability of the latest word in a sequence as well as a vector representation for each word in a predefined vocabulary. Recently recurrent neural networks have become one of the most widely used models in language modelling (Mikolov et al., 2010). Long short-term memory unit (LSTM, Hochreiter and Schmidhuber, 1997) is one of the most common recurrent activation function. Architecturally, the memory state and output state are explicitly separated by activation gates such that the vanishing gradient and exploding gradient problems described in Bengio et al. (1994) is avoided. Motivated by such gated model, a number of variants of RNNs (e.g. Cho et al. (GRU, 2014b), Chung et al. (GF-RNN, 2015)) have been designed to easily capture long-term dependencies. When modelling a corpus, these language models assume the mutual independence among sentences, and the task is often reduced to assigning a probability to a single sentence. In this work, we propose a method to incorporate corpus-level discourse dependency into neural language model. We call this larger-context language model. 
It models the influence of context by defining a conditional probability in the form of P(wn|w1:n−1, S), where w1, ..., wn are words from the same sentence, and S represents the context which consists a number of previous sentences of arbitrary length. We evaluated our model on four different corpora (IMDB, BBC, Penn TreeBank, and Fil9). Our experiments demonstrate that the proposed larger-context language model improve perplex1319 ity for sentences, significantly reducing per-word perplexity compared to the language models without context information. Further, through Part-OfSpeech tag analysis, we discovered that content words, including nouns, adjectives and verbs, benefit the most from increasing number of context sentences. Such discovery led us to the conclusion that larger-context language model improves the unconditional language model by capturing the theme of a document. To achieve such improvement, we proposed a late fusion approach, which is a modification to the LSTM such that it better incorporates the discourse context from preceding sentences. In the experiments, we evaluated the proposed approach against early fusion approach with various numbers of context sentences, and demonstrated the late fusion is superior to the early fusion approach. Our model explores another aspect of contextdependent recurrent language model. It is novel in that it also provides an insightful way to feed information into LSTM unit, which could benefit all encoder-decoder based applications. 2 Statistical Language Modelling with Recurrent Neural Network Given a document D = (S1, S2, . . . , SL) which consists of L sentences, statistical language modelling aims at computing its probability P(D). It is often assumed that each sentence in the whole document is mutually independent from each other: P(D) ≈ L Y l=1 P(Sl). (1) We call this probability (before approximation) a corpus-level probability. Under this assumption of mutual independence among sentences, the task of language modelling is often reduced to assigning a probability to a single sentence P(Sl). A sentence Sl = (w1, w2, . . . , wTl) is a variable-length sequence of words or tokens. By assuming that a word at any location in a sentence is largely predictable by preceding words, we can rewrite the sentence probability into P(S) = Tl Y t=1 p(wt|w<t), (2) where w<t denotes all the preceding words. We call this a sentence-level probability. This rewritten probability expression can be either directly modelled by a recurrent neural network (Mikolov et al., 2010) or further approximated as a product of n-gram conditional probabilities such that P(S) ≈ Tl Y t=1 p(wt|wt−1 t−(n−1)), (3) where wt−1 t−(n−1) = (wt−(n−1), . . . , wt−1). The latter is called n-gram language modelling. A recurrent language model is composed of two functions–transition and output functions. The transition function reads one word wt and updates its hidden state such that ht = φ (wt, ht−1) , (4) where h0 is an all-zero vector. φ is a recurrent activation function. For more details on widelyused recurrent activation units, we refer the reader to (Jozefowicz et al., 2015; Greff et al., 2015). At each timestep, the output function computes the probability over all possible next words in the vocabulary V . This is done by p(wt+1 = w′|wt 1) ∝exp (gw′(ht)) . (5) g is commonly an affine transformation: g(ht) = Woht + bo, where Wo ∈R|V |×d and bo ∈R|V |. 
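To make Eqs. (2), (4) and (5) concrete, the following minimal NumPy sketch scores a toy sentence under a recurrent language model by accumulating log p(w_t | w_<t). The plain tanh transition stands in for the recurrent activation φ (an LSTM in the paper, described next), and all dimensions and parameter values are arbitrary illustration choices rather than settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 8                      # toy vocabulary size and hidden size

# Model parameters (randomly initialized for the sketch).
E  = rng.normal(0, 0.1, (V, d))   # word embeddings
Wh = rng.normal(0, 0.1, (d, d))   # recurrent weights
Wx = rng.normal(0, 0.1, (d, d))   # input weights
Wo = rng.normal(0, 0.1, (V, d))   # output weights of g in Eq. (5)
bo = np.zeros(V)

def step(h_prev, w):
    """Transition function phi of Eq. (4): read one word id, update the hidden state."""
    return np.tanh(Wx @ E[w] + Wh @ h_prev)

def next_word_logprobs(h):
    """Eq. (5): log-distribution over the next word given the hidden state."""
    scores = Wo @ h + bo
    return scores - np.log(np.sum(np.exp(scores)))   # log softmax

def sentence_logprob(word_ids):
    """Eq. (2): sum of log p(w_t | w_<t); the first word is treated as given
    (a start-of-sentence symbol would normally be used instead)."""
    h, total = np.zeros(d), 0.0
    for prev, cur in zip(word_ids[:-1], word_ids[1:]):
        h = step(h, prev)
        total += next_word_logprobs(h)[cur]
    return total

print(sentence_logprob([1, 4, 2, 7, 3]))   # log P(S) for a toy sentence
```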
The whole model is trained by maximizing the log-likelihood of a training corpus often using stochastic gradient descent with backpropagation through time (see, e.g., Rumelhart et al., 1988). This conventional approach to statistical language modelling often treats every sentence in a document to be independent from each other This is often due to the fact that downstream tasks, such as speech recognition and machine translation, are done sentence-wise. In this paper, we ask how strong an assumption this is, how much impact this assumption has on the final language model quality and how much gain language modelling can get by making this assumption less strong. Long Short-Term Memory Here let us briefly describe a long short-term memory unit which is widely used as a recurrent activation function φ (see Eq. (4)) for language modelling (see, e.g., Graves, 2013). 1320 A layer of long short-term memory (LSTM) unit consists of three gates and a single memory cell. They are computed by it =σ (Wixt + Uiht−1 + bi) ot =σ (Woxt + Uoht−1 + bo) ft =σ (Wfxt + Ufht−1 + bf) , where σ is a sigmoid function. xt is the input at time t. The memory cell is computed by ct = ft ⊙ct−1 + it ⊙tanh (Wcx + Ucht−1 + bc) , where ⊙is an element-wise multiplication. This adaptive leaky integration of the memory cell allows the LSTM to easily capture long-term dependencies in the input sequence. The output, or the activation of this LSTM layer, is then computed by ht = ot ⊙tanh(ct). 3 Larger-Context Language Modelling In this paper, we aim not at improving the sentence-level probability estimation P(S) (see Eq. (2)) but at improving the corpus-level probability P(D) from Eq. (1) directly. One thing we noticed at the beginning of this work is that it is not necessary for us to make the assumption of mutual independence of sentences in a corpus. Rather, similarly to how we model a sentence probability, we can loosen this assumption by P(D) ≈ L Y l=1 P(Sl|Sl−1 l−n), (6) where Sl−1 l−n = (Sl−n, Sl−n+1, . . . , Sl−1). n decides on how many preceding sentences each conditional sentence probability conditions on, similarly to what happens with a usual n-gram language modelling. From the statistical modelling’s perspective, estimating the corpus-level language probability in Eq. (6) is equivalent to build a statistical model that approximates P(Sl|Sl−1 l−n) = Tl Y t=1 p(wt|w<t, Sl−1 l−n), (7) similarly to Eq. (2). One major difference from the existing approaches to statistical language modelling is that now each conditional probability of a next word is conditioned not only on the preceding words in the same sentence, but also on the n −1 preceding sentences. A conventional, count-based n-gram language model is not well-suited due to the issue of data sparsity. In other words, the number of rows in the table storing n-gram statistics will explode as the number of possible sentence combinations grows exponentially with respect to both the vocabulary size, each sentence’s length and the number of context sentences. Either neural or recurrent language modelling however does not suffer from this issue of data sparsity. This makes these models ideal for modelling the larger-context sentence probability in Eq. (7). More specifically, we are interested in adapting the recurrent language model for this. In doing so, we answer two questions in the following subsections. First, there is a question of how we should represent the context sentences Sl−1 l−n. We consider two possibilities in this work. 
Second, there is a large freedom in how we build a recurrent activation function to be conditioned on the context sentences. We also consider two alternatives in this case. 3.1 Context Representation A sequence of preceding sentences can be represented in many different ways. Here, let us describe two alternatives we test in the experiments. The first representation is to simply bag all the words in the preceding sentences into a single vector s ∈[0, 1]|V |. Any element of s corresponding to the word that exists in one of the preceding sentences will be assigned the frequency of that word, and otherwise 0. This vector is multiplied from left by a matrix P which is tuned together with all the other parameters: p = Ps. We call this representation p a bag-of-words (BoW) context. Second, we try to represent the preceding context sentences as a sequence of bag-of-words. Each bag-of-word sj is the bag-of-word representation of the j-th context sentence, and they are put into a sequence (sl−n, . . . , sl−1). Unlike the first BoW context, this allows us to incorporate the order of the preceding context sentences. This sequence of BoW vectors are read by a recurrent neural network which is separately from the one used for modelling a sentence (see Eq. (4).) We use LSTM units as recurrent activations, and for each context sentence in the sequence, we get zt = φ (xt, zt−1) , for t = l − n, . . . , l −1. We set the last hidden state zl−1 of this context recurrent neural network as the con1321 text vector p. Attention-based Context Representation The sequence of BoW vectors can be used in a bit different way from the above. Instead of a unidirectional recurrent neural network, we first use a bidirectional recurrent neural network to read the sequence. The forward recurrent neural network reads the sequence as usual in a forward direction, and the reverse recurrent neural network in the opposite direction. The hidden states from these two networks are then concatenated for each context sentence in order to form a sequence of annotation vectors (zl−n, . . . , zl−1). Unlike the other approaches, in this case, the context vector p differs for each word wt in the current sentence, and we denote it by pt. The context vector pt for the t-th word is computed as the weighted sum of the annotation vectors: pt = l−1 X l′=l−n αt,l′zl′, where the attention weight αt,l′ is computed by αt,l′ = exp score (zl′, ht) Pl−1 k=l−n exp score (zk, ht) . ht is the hidden state of the recurrent language model of the current sentence from Eq. (5). The scoring function score(zl′, ht) returns a relevance score of the l′-th context sentence w.r.t. ht. 3.2 Conditional LSTM Early Fusion Once the context vector p is computed from the n preceding sentences, we need to feed this into the sentence-level recurrent language model. One most straightforward way is to simply consider it as an input at every time step such that x = E⊤wt + Wpp, where E is the word embedding matrix that transforms the one-hot vector of the t-th word into a continuous word vector. We call this approach an early fusion of the context. Late Fusion In addition to this approach, we propose here a modification to the LSTM such that it better incorporates the context from the preceding sentences (summarized by pt.) 
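As a concrete illustration of the attention-based context representation just described, the sketch below computes the per-word context vector p_t in NumPy. A dot-product score is assumed for score(z_l', h_t) purely for illustration (in practice the annotation vectors from the bidirectional reader would need to match, or be projected to, the dimensionality of h_t, or be scored with a small bilinear or MLP form); the late-fusion modification itself is spelled out next.

```python
import numpy as np

def attention_context(Z, h_t):
    """Z: (n, d) annotation vectors for the n context sentences,
    h_t: (d,) hidden state of the current-sentence language model.
    Returns p_t, the attention-weighted context vector."""
    scores = Z @ h_t                                  # score(z_l', h_t), dot product assumed
    scores = scores - scores.max()                    # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()     # attention weights alpha_{t, l'}
    return alpha @ Z                                  # p_t = sum_l' alpha_{t, l'} z_l'

# Toy usage: 4 context sentences, hidden size 8.
rng = np.random.default_rng(1)
Z = rng.normal(size=(4, 8))
h_t = rng.normal(size=8)
print(attention_context(Z, h_t).shape)   # (8,)
```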
The basic idea is to keep dependencies within the sentence being modelled (intra-sentence dependencies) and those between the preceding sentences (a) Early Fusion (b) Late Fusion Figure 1: Proposed fusion methods and the current sent (inter-sentence dependencies) separately from each other. We let the memory cell ct of the LSTM to model intra-sentence dependencies. This simply means that there is no change to the existing formulation of the LSTM. The inter-sentence dependencies are reflected on the interaction between the memory cell ct, which models intra-sentence dependencies, and the context vector p, which summarizes the n preceding sentences. We model this by first computing the amount of influence of the preceding context sentences as rt = σ (Wr (Wpp) + Wrct + br) . This vector rt controls the strength of each of the elements in the context vector p. This amount of influence from the n preceding sentences is decided based on the currently captured intrasentence dependency structures and the preceding sentences. This controlled context vector rt ⊙(Wpp) is used to compute the output of the LSTM layer: ht = ot ⊙tanh (ct + rt ⊙(Wpp)) . This is illustrated in Fig. 1 (b). We call this approach a late fusion, as the effect of the preceding context is fused together with 1322 the intra-sentence dependency structure in the later stage of the recurrent activation. Late fusion is a simple, but effective way to mitigate the issue of vanishing gradient in corpuslevel language modelling. By letting the context representation flow without having to pass through saturating nonlinear activation functions, it provides a linear path through which the gradient for the context flows easily. 4 Related Work Context-dependent Language Model This possibility of extending a neural or recurrent language modeling to incorporate larger context was explored earlier. Especially, (Mikolov and Zweig, 2012) proposed an approach, called context-dependent recurrent neural network language model, very similar to the proposed approach here. The basic idea of their approach is to use a topic distribution, represented as a vector of probabilities, of previous n words when computing the hidden state of the recurrent neural network each time. There are three major differences in the proposed approach from the work by Mikolov and Zweig (2012). First, the goal in this work is to explicitly model preceding sentences to better approximate the corpus-level probability (see Eq. (6)) rather than to get a better context of the current sentence. Second, Mikolov and Zweig (2012) use an external method, such as latent Dirichlet allocation (Blei et al., 2003) or latent semantics analysis (Dumais, 2004) to extract a feature vector, whereas we learn the whole model, including the context vector extraction, end-to-end. Third, we propose a late fusion approach which is well suited for the LSTM units which have recently been widely adopted many works involving language models (see, e.g., Sundermeyer et al., 2015). This late fusion is later shown to be superior to the early fusion approach. Dialogue Modelling with Recurrent Neural Networks A more similar model to the proposed larger-context recurrent language model is a hierarchical recurrent encoder decoder (HRED) proposed recently by Serban et al. (2015). The HRED consists of three recurrent neural networks to model a dialogue between two people from the perspective of one of them, to which we refer as a speaker. 
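To make the two fusion strategies of Sec. 3.2 concrete, the following NumPy sketch shows how a projected context vector enters an LSTM step under early fusion versus late fusion. The cell state c_t and output gate o_t are stand-ins for the quantities a standard LSTM step would produce, the dimensions are toy values, and a single W_r is applied to both gate terms, following the equations as written above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8                              # toy hidden size
rng = np.random.default_rng(2)
Wp = rng.normal(0, 0.1, (d, d))    # context projection
Wr = rng.normal(0, 0.1, (d, d))    # late-fusion gate weights
br = np.zeros(d)

def early_fusion_input(e_wt, p):
    """Early fusion: the projected context is added to the word embedding
    and fed to the LSTM as its input x_t."""
    return e_wt + Wp @ p

def late_fusion_output(c_t, o_t, p):
    """Late fusion: the memory cell c_t keeps intra-sentence dependencies;
    the gate r_t controls how much of the projected context W_p p enters
    the layer output."""
    proj = Wp @ p
    r_t = sigmoid(Wr @ proj + Wr @ c_t + br)
    return o_t * np.tanh(c_t + r_t * proj)

# Toy usage with stand-in LSTM quantities.
c_t = rng.normal(size=d)            # memory cell from a standard LSTM step
o_t = sigmoid(rng.normal(size=d))   # output gate from the same step
p   = rng.normal(size=d)            # context vector from Sec. 3.1
e_wt = rng.normal(size=d)           # embedding of the current word
x_t = early_fusion_input(e_wt, p)
print(x_t.shape, late_fusion_output(c_t, o_t, p).shape)   # (8,) (8,)
```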
If we consider the last utterance of the speaker, the HRED is a larger-context recurrent language model with early fusion. Aside the fact that the ultimate goals differ (in their case, dialogue modelling and in our case, document modelling), there are two technical differences. First, they only test with the early fusion approach. We show later in the experiments that the proposed late fusion gives a better language modelling quality than the early fusion. Second, we use a sequence of bag-of-words to represent the preceding sentences, while the HRED a sequence of sequences of words. This allows the HRED to potentially better model the order of the words in each preceding sentence, but it increases computational complexity (one more recurrent neural network) and decreases statistical efficient (more parameters with the same amount of data.) Skip-Thought Vectors Perhaps the most similar work is the skip-thought vector by Kiros et al. (2015). In their work, a recurrent neural network is trained to read a current sentence, as a sequence of words, and extract a so-called skip-thought vector of the sentence. There are two other recurrent neural networks which respectively model preceding and following sentences. If we only consider the prediction of the following sentence, then this model becomes a larger-context recurrent language model which considers a single preceding sentence as a context. As with the other previous works we have discussed so far, the major difference is in the ultimate goal of the model. Kiros et al. (2015) fully focused on using their model to extract a good, generic sentence vector, while in this paper we are focused on obtaining a good language model. There are less major technical differences. First, the skip-thought vector model conditions only on the immediate preceding sentence, while we extend this to multiple preceding sentences. Second, similarly to the previous works by Mikolov and Zweig (2012), the skip-thought vector model only implements early fusion. Neural Machine Translation Neural machine translation is another related approach (Forcada and ˜Neco, 1997; Kalchbrenner and Blunsom, 2013; Cho et al., 2014b; Sutskever et al., 2014; Bahdanau et al., 2014). In neural machine translation, often two recurrent neural networks are used. The first recurrent neural network, called an encoder, reads a source sentence, represented as a sequence of words in a source language, to form a context vector, or a set of context vectors. The 1323 other recurrent neural network, called a decoder, then, models the target translation conditioned on this source context. This is similar to the proposed larger-context recurrent language model, if we consider the source sentence as a preceding sentence in a corpus. The major difference is in the ultimate application, machine translation vs. language modelling, and technically, the differences between neural machine translation and the proposed larger-context language model are similar to those between the HRED and the larger-context language model. Context-Dependent Question-Answering Models Context-dependent question-answering is a task in which a model is asked to answer a question based on the facts from a natural language paragraph. The question and answer are often formulated as filling in a missing word in a query sentence (Hermann et al., 2015; Hill et al., 2015). 
This task is closely related to the larger-context language model we proposed in this paper in the sense that its goal is to build a model to learn p(qk|q<k, q>k, D), (8) where qk is the missing k-th word in a query Q, and q<k and q>k are the context words from the query. D is the paragraph containing facts about this query. It is explicitly constructed so that the query q does not appear in the paragraph D. It is easy to see the similarity between Eq. (8) and one of the conditional probabilities in the r.h.s. of Eq. (7). By replacing the context sentences Sl−1 l−n in Eq. (7) with D in Eq. (8) and conditioning wt on both the preceding and following words, we get a context-dependent questionanswering model. In other words, the proposed larger-context language model can be used for context-dependent question-answering, however, with computational overhead. The overhead comes from the fact that for every possible answer the conditional probability completed query sentence must be evaluated. 5 Experimental Settings 5.1 Models There are six possible combinations of the proposed methods. First, there are two ways of representing the context sentences; (1) bag-of-words (BoW) and (2) a sequence of bag-of-words (SeqBoW), from Sec. 3.1. There are two separate ways to incorporate the SeqBoW; (1) with attention mechanism (ATT) and (2) without it. Then, there are two ways of feeding the context vector into the main recurrent language model (RLM); (1) early fusion (EF) and (2) late fusion (LF), from Sec. 3.2. We will denote them by 1. RLM-BoW-EF-n 2. RLM-SeqBoW-EF-n 3. RLM-SeqBoW-ATT-EF-n 4. RLM-BoW-LF-n 5. RLM-SeqBoW-LF-n 6. RLM-SeqBoW-ATT-LF-n n denotes the number of preceding sentences to have as a set of context sentences. We test four different values of n; 1, 2, 4 and 8. As a baseline, we also train a recurrent language model without any context information. We refer to this model by RLM. Furthermore, we also report the result with the conventional, count-based n-gram language model with the modified KneserNey smoothing with KenLM (Heafield et al., 2013). Each recurrent language model uses 1000 LSTM units and is trained with Adadelta (Zeiler, 2012) to maximize the log-likelihood; L(θ) = 1 K PK k=1 log p(Sk|Sk−1 k−n). We early-stop training based on the validation log-likelihood and report the perplexity on the test set using the best model according to the validation log-likelihood. We use only those sentences of length up to 50 words when training a recurrent language model for the computational reason. For KenLM, we used all available sentences in a training corpus. 5.2 Datasets We evaluate the proposed larger-context language model on three different corpora. For detailed statistics, see Table 1. IMDB Movie Reviews A set of movie reviews is an ideal dataset to evaluate many different settings of the proposed larger-context language models, because each review is highly likely of a single theme (the movie under review.) A set of words or the style of writing will be well determined based on the preceding sentences. We use the IMDB Movie Review Corpus (IMDB) prepared by Maas et al. (2011).1 This corpus has 75k training reviews and 25k test reviews. 1http://ai.stanford.edu/˜amaas/data/ sentiment/ 1324 (a) IMDB (b) Penn Treebank (c) BBC (d) Fil9 Figure 2: Corpus-level perplexity on (a) IMDB, (b) Penn Treebank, (c) BBC and (d) Fil9. The countbased 5-gram language models with Kneser-Ney smoothing respectively resulted in the perplexities of 110.20, 148, 127.32 and 65.21, and are not shown here. 
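Sec. 5.1 states that each recurrent language model is trained with Adadelta (Zeiler, 2012) to maximize the log-likelihood. As a reminder of what that update involves, the sketch below gives the generic Adadelta rule for a single parameter array; the hyperparameter values shown are common defaults, not values reported in the paper.

```python
import numpy as np

class Adadelta:
    """Generic Adadelta update (Zeiler, 2012) for one parameter array."""
    def __init__(self, shape, rho=0.95, eps=1e-6):
        self.rho, self.eps = rho, eps
        self.acc_grad = np.zeros(shape)    # running average of squared gradients
        self.acc_delta = np.zeros(shape)   # running average of squared updates

    def update(self, param, grad):
        # grad is the gradient of the loss (e.g., the negative log-likelihood).
        self.acc_grad = self.rho * self.acc_grad + (1 - self.rho) * grad**2
        delta = -np.sqrt(self.acc_delta + self.eps) / np.sqrt(self.acc_grad + self.eps) * grad
        self.acc_delta = self.rho * self.acc_delta + (1 - self.rho) * delta**2
        return param + delta

# Toy usage: one step on the quadratic loss 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0, 3.0])
opt = Adadelta(w.shape)
print(opt.update(w, grad=w))
```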
We use the 30k most frequent words in the training corpus for recurrent language models. BBC Similarly to movie reviews, each new article tends to convey a single theme. We use the BBC corpus prepared by Greene and Cunningham (2006).2 Unlike the IMDB corpus, this corpus contains news articles which are almost always written in a formal style. By evaluating the proposed approaches on both the IMDB and BBC corpora, we can tell whether the benefits from larger context exist in both informal and formal languages. We use the 10k most frequent words in the training corpus for recurrent language models. Both with the IMDB and BBC corpora, we did not do any preprocessing other than tokenization.3 Penn Treebank We evaluate a normal recurrent language model, count-based n-gram language model as well as the proposed RLM-BoW-EF-n and RLM-BoW-LF-n with varying n = 1, 2, 4, 8 on the Penn Treebank Corpus. We preprocess the 2http://mlg.ucd.ie/datasets/bbc.html 3https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ tokenizer/tokenizer.perl corpus according to (Mikolov et al., 2011) and use a vocabulary of 10k words from the training corpus. Fil9 Fil9 is a cleaned Wikipedia corpus, consisting of approximately 140M tokens, and is provided on Matthew Mahoney’s website.4 We tokenized the corpus and used the 44k most frequent words in the training corpus for recurrent language models. 6 Results and Analysis Corpus-level Perplexity We evaluated the models, including all the proposed approaches (RLM{BoW,SeqBoW}-{ATT,∅}-{EF,LF}-n), on the IMDB corpus. In Fig. 2 (a), we see three major trends. First, RLM-BoW, either with the early fusion or late fusion, outperforms both the count-based n-gram and recurrent language model (LSTM) regardless of the number of context sentences. Second, the improvement grows as the number n of context sentences increases, and this is most visible with the novel late fusion. Lastly, 4http://mattmahoney.net/dc/textdata 1325 (i) RLM-BoW-LF (ii) RLM-SeqBoW-ATT-LF (a) IMDB (b) BBC (b) Penn Treebank Figure 3: Perplexity per POS tag on the (a) IMDB, (b) BBC and (c) Penn Treebank corpora. we see that the RLM-SeqBoW does not work well regardless of the fusion type (RLM-SeqBowEF not shown), while the attention-based model (RLM-SeqBow-ATT) outperforms all the others. After observing that the late fusion clearly outperforms the early fusion, we evaluated only RLM-{BoW,SeqBoW}-{ATT}-LF-n’s on the other two corpora. On the other two corpora, PTB and BBC, we observed a similar trend of RLM-SeqBoWATT-LF-n and RLM-BoW-LF-n outperforming the two conventional language models, and that this trend strengthened as the number n of the context sentences grew. We also observed again that the RLM-SeqBoW-ATT-LF outperforms RLMSeqBoW-LF and RLM-BoW in almost all the cases. Observing the benefit of RLM-SeqBoW-ATTLF, we evaluated only such model on Fil9 to validate its performance on large corpus. Similar to the results on all three previous corpora, we continue to observe the advantage of RLM-SeqBoWATT-LF-n on Fil9 corpus. From these experiments, the benefit of allowing larger context to a recurrent language model is clear, however, with the right choice of the context representation (see Sec. 3.1) and the right mechanism for feeding the context information to the recurrent language model (see Sec. 3.2.) In these experiments, the sequence of bag-of-words representation with attention mechanism, together with the late fusion was found to be the best choice in all four corpora. 
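Since the comparisons above are all reported as corpus-level perplexity, the following sketch spells out the generic conversion from the per-sentence conditional log-probabilities of Eq. (7) to the training objective of Sec. 5.1 and to per-word perplexity. The paper does not include its evaluation code, so this is only the standard computation, shown on made-up numbers.

```python
import numpy as np

def training_objective(sentence_logprobs):
    """L(theta) of Sec. 5.1: mean log p(S_k | S_{k-n}^{k-1}) over the corpus."""
    return float(np.mean(sentence_logprobs))

def perplexity(sentence_logprobs, sentence_lengths):
    """Per-word perplexity: exp of the negative total log-probability (in nats)
    divided by the total number of word tokens."""
    return float(np.exp(-np.sum(sentence_logprobs) / np.sum(sentence_lengths)))

# Toy usage: three sentences with made-up scores and lengths.
logprobs = [-35.2, -41.0, -28.7]
lengths = [10, 12, 8]
print(training_objective(logprobs), perplexity(logprobs, lengths))
```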
One possible explanation on the failure of the SeqBoW representation with a context recurrent neural network is that it is simply difficult for the context recurrent neural network to compress multiple sentences into a single vector. This difficulty in training a recurrent neural network to compress a long sequence into a single vector has been observed earlier, for instance, in neural machine translation (Cho et al., 2014a). Attention mechanism, which was found to avoid this problem in machine translation (Bahdanau et al., 2014), is found to solve this problem in our task as well. Perplexity per Part-of-Speech Tag Next, we attempted at discovering why the larger-context recurrent language model outperforms the unconditional one. In order to do so, we computed the perplexity per part-of-speech (POS) tag. We used the Stanford log-linear part-of-speech tagger (Stanford POS Tagger, Toutanova et al., 2003) to tag each word of each sentence in the corpora.5 We then computed the perplexity of each word and averaged them for each tag type separately. Among the 36 POS tags used by the Stanford POS Tagger, we looked at the perplexities of the ten most frequent tags (NN, IN, DT, JJ, RB, NNS, VBZ, VB, PRP, CC), of which we combined NN and NNS into a new tag Noun and VB and VBZ into a new tag Verb. We show the results using the RLM-BoWLF and RLM-SeqBoW-ATT-LF on three corpora– IMDB, BBC and Penn Treebank– in Fig. 3. We observe that the predictability, measured by the perplexity (negatively correlated), grows most for nouns (Noun) and adjectives (JJ) as the number of context sentences increases. They are followed by verbs (Verb). In other words, nouns, adjec5http://nlp.stanford.edu/software/ tagger.shtml 1326 IMDB BBC Penn TreeBank Fil9 # Sentences # Words # Sentences # Words # Sentences # Words # Sentences # Words Training 930,139 21M 37,207 890K 42,068 888K 6,619,098 115M Validation 152,987 3M 1,998 49K 3,370 70K 825,919 14M Test 151,987 3M 2,199 53K 3,761 79K 827,416 14M Table 1: Statistics of IMDB, BBC, Penn TreeBank and Fil9. tives and verbs are the ones which become more predictable by a language model given more context. We however noticed the relative degradation of quality in coordinating conjunctions (CC), determiners (DT) and personal pronouns (PRP). It is worthwhile to note that nouns, adjectives and verbs are open-class, content, words, and conjunctions, determiners and pronouns are closedclass, function, words (see, e.g., Miller, 1999). The functions words often play grammatical roles, while the content words convey the content of a sentence or discourse, as the name indicates. From this, we may carefully conclude that the largercontext language model improves upon the conventional, unconditional language model by capturing the theme of a document, which is reflected by the improved perplexity on “content-heavy” open-class words (Chung and Pennebaker, 2007). In our experiments, this came however at the expense of slight degradation in the perplexity of function words, as the model’s capacity stayed same (though, it is not necessary.) This observation is in line with a recent finding by Hill et al. (2015). They also observed significant gain in predicting open-class, or content, words when a question-answering model, including humans, was allowed larger context. 7 Conclusion In this paper, we proposed a method to improve language model on corpus-level by incorporating larger context. 
Using this model results in the improvement in perplexity on the IMDB, BBC, Penn Treebank and Fil9 corpora, validating the advantage of providing larger context to a recurrent language model. From our experiments, we found that the sequence of bag-of-words with attention is better than bag-of-words for representing the context sentences (see Sec. 3.1), and the late fusion is better than the early fusion for feeding the context vector into the main recurrent language model (see Sec. 3.2). Our part-of-speech analysis revealed that content words, including nouns, adjectives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language model improves perplexity because it captures the theme of a document better and more easily. To explore the potential of such a model, there are several aspects in which more research needs to be done. First, the four datasets we used in this paper are relatively small in the context of language modelling, therefore the proposed largercontext language model should be evaluated on larger corpora. Second, more analysis, beyond the one based on part-of-speech tags, should be conducted in order to better understand the advantage of such larger-context models. Lastly, it is important to evaluate the impact of the proposed largercontext models in downstream tasks such as machine translation and speech recognition. Acknowledgments This work is done as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University. KC thanks Facebook, Google (Google Faculty Award 2016) and NVidia (GPU Center of Excellence 2015–2016). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research 3:1137–1155. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on 5(2):157–166. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research 3:993–1022. 1327 Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8). Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Cindy Chung and James W Pennebaker. 2007. The psychological functions of function words. Social communication pages 343–359. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pages 2067–2075. Susan T Dumais. 2004. Latent semantic analysis. Annual review of information science and technology 38(1):188–230. Mikel L Forcada and Ram´on P ˜Neco. 1997. Recursive hetero-associative memories for translation. In Biological and Artificial Computation: From Neuroscience to Technology, Springer, pages 453–462. Alex Graves. 2013. 
Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . Derek Greene and P´adraig Cunningham. 2006. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proc. 23rd International Conference on Machine learning (ICML’06). ACM Press, pages 377–384. Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn´ık, Bas R Steunebrink, and J¨urgen Schmidhuber. 2015. Lstm: A search space odyssey. arXiv preprint arXiv:1503.04069 . Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, pages 690–696. Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340 . Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. arXiv preprint arXiv:1511.03962 . Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pages 2342–2350. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP. pages 1700–1709. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726 . Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 142–150. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010. pages 1045– 1048. Tom´aˇs Mikolov, Stefan Kombrink, Luk´aˇs Burget, Jan Honza ˇCernock`y, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, pages 5528–5531. 1328 Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In SLT. pages 234–239. George A Miller. 1999. On knowing a word. Annual Review of psychology 50(1):1–19. Ronald Rosenfeld. 2000. Two decades of statistical language modeling: where do we go from here. In Proceedings of the IEEE. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back-propagating errors. Cognitive modeling 5:3. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808 . Martin Sundermeyer, Hermann Ney, and Ralf Schluter. 2015. 
From feedforward to recurrent lstm neural networks for language modeling. Audio, Speech, and Language Processing, IEEE/ACM Transactions on 23(3):517–529. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Featurerich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, pages 173–180. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701 . 1329
2016
125
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1330–1340, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics The Creation and Analysis of a Website Privacy Policy Corpus Shomir Wilson1, Florian Schaub1, Aswarth Abhilash Dara1, Frederick Liu1, Sushain Cherivirala1, Pedro Giovanni Leon2, Mads Schaarup Andersen1, Sebastian Zimmeck3, Kanthashree Mysore Sathyendra1, N. Cameron Russell4, Thomas B. Norton4, Eduard Hovy1, Joel Reidenberg4 and Norman Sadeh1 1Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, USA 2Stanford University, Center for Internet and Society, Stanford, CA, USA 3Columbia University, Department of Computer Science, New York, NY, USA 4Fordham University, Law School, New York, NY, USA [email protected], [email protected] Abstract Website privacy policies are often ignored by Internet users, because these documents tend to be long and difficult to understand. However, the significance of privacy policies greatly exceeds the attention paid to them: these documents are binding legal agreements between website operators and their users, and their opaqueness is a challenge not only to Internet users but also to policy regulators. One proposed alternative to the status quo is to automate or semi-automate the extraction of salient details from privacy policy text, using a combination of crowdsourcing, natural language processing, and machine learning. However, there has been a relative dearth of datasets appropriate for identifying data practices in privacy policies. To remedy this problem, we introduce a corpus of 115 privacy policies (267K words) with manual annotations for 23K fine-grained data practices. We describe the process of using skilled annotators and a purpose-built annotation tool to produce the data. We provide findings based on a census of the annotations and show results toward automating the annotation procedure. Finally, we describe challenges and opportunities for the research community to use this corpus to advance research in both privacy and language technologies. 1 Introduction Privacy policies written in natural language are a nearly pervasive feature of websites and mobile applications. The “notice and choice” legal regimes of many countries require that website operators post a notice of how they gather and process users’ information. In theory, users then choose whether to accept those practices or to abstain from using the website or service. In practice, however, the average Internet user struggles to understand the contents of privacy policies (McDonald and Cranor, 2008) and generally does not read them (Federal Trade Commission, 2012; President’s Concil of Advisors on Science and Technology, 2014). This disconnect between Internet users and the data practices that affect them has led to the assessment that the notice and choice model is ineffective in the status quo (Reidenberg et al., 2015b; Cate, 2010). Thus, an opening exists for language technologies to help “bridge the gap” between privacy policies in their current form and representations that serve the needs of Internet users. Such a bridge would also serve unmet needs of policy regulators, who do not have the means to assess privacy policies in large numbers. Legal text is a familiar domain for natural language processing, and the legal community has demonstrated some reciprocal interest (Mahler, 2015). 
However, the scale of the problem and its significance—i.e., to virtually any Internet user, as well as to website operators and policy regulators—distinguishes it and provides immense motivation (Sadeh et al., 2013). To this end, we introduce a corpus of 115 website privacy policies annotated with detailed information about the data practices that they describe.1 This information consists of 23K data practices, 128K practice attributes, and 103K annotated text spans, all produced by skilled anno1The dataset is available for download at www.usableprivacy.org/data. 1330 tators. To the best of our knowledge, this is the first large-scale effort to annotate privacy policies at such a fine level of detail. It exceeds prior efforts to annotate sentence-level fragments of policy text (Breaux and Schaub, 2014), answer simple overarching questions about privacy policy contents (Wilson et al., 2016; Zimmeck and Bellovin, 2014), or analyze the readability of privacy policies (Massey et al., 2013). We further present analysis that demonstrates the richness of the corpus and the feasibility of partly automating the annotation of privacy policies. The remainder of this paper is structured as follows. We discuss related work and contextualize the corpus we have created in Section 2. In Section 3 we describe the creation of the corpus, including the collection of a diverse set of policies and the creation of a privacy policy annotation tool. Section 4 presents analysis that illustrates the diversity and complexity of the corpus, and Section 5 shows results on the prediction of policy structure. Finally, in Section 6 we describe some promising avenues for future work. 2 Related Work Prior attempts on analyzing privacy policies focused largely on manually assessing their usability (Jensen and Potts, 2004) or compliance with self-regulatory requirements (Hoke et al., 2015). Breaux et al. proposed a description logic to analyze and reason about data sharing properties in privacy policies (2013), but rely on a small set of manually annotated privacy policies to instantiate their language. Automated assessments have largely focused on readability scores (Massey et al., 2013; Meiselwitz, 2013; Ermakova et al., 2015). Cranor et al. leveraged the standardized format of privacy notices in the U.S. financial industry to automatically analyze privacy polices of financial institutions (2013). However, in spite of notable efforts such as P3P (Wenning et al., 2006), the majority of privacy policies are unstructured and do not follow standardized formats. Costante et al. (2012) proposed a supervised learning approach to determine which data practice categories are covered in a privacy policy. Rule-based extraction techniques have been proposed to extract some of a website’s data collection practices from its privacy policy (Costante et al., 2013) or to answer certain binary questions about a privacy policy (Zimmeck and Bellovin, 2014). Other approaches leverage topic modeling (Chundi and Subramaniam, 2014; Stamey and Rossi, 2009) or sequence alignment techniques (Liu et al., 2014; Ramanath et al., 2014) to analyze privacy policies or identify similar policy sections and paragraphs. However, the complexity and vagueness of privacy policies makes it difficult to automatically extract complex data practices from privacy policies without substantial gold standard data. 
Crowdsourcing has been proposed as a potential approach to obtain annotations for privacy policies (Sadeh et al., 2013; Breaux and Schaub, 2014; Wilson et al., 2016). However, crowdworkers are not trained in understanding and interpreting legal documents, which may result in interpretation discrepancies compared to experts (Reidenberg et al., 2015a). Our policy annotation tool shares some common features with GATE (Bontcheva et al., 2013), although the interface for our tool is simpler to fit the specific requirements of the task. Few prior efforts, aside from those we cite above, have applied natural language processing to privacy policies or other legal documents purported for the general public to regularly read. More generally, legal text has a history of attention from natural language processing (Bach et al., 2013; Galgani et al., 2012; Francesconi et al., 2010) and from artificial intelligence (Sartor and Rotolo, 2013; Bench-Capon et al., 2012). Classifying legal text into categories has received some interest (ˇSavelka and Ashley, 2015; Mickevicius et al., 2015), as well as making the contents of legal texts more accessible (Boella et al., 2015; Curtotti and McCreath, 2013). Compared to prior efforts, our data set is notable for its combination of size, input from experts (for the label scheme) and skilled annotators (for the annotation procedure), and fine-grained detail. 3 Corpus Creation and Structure In this section we describe our procedure for selecting a diverse set of privacy policies, our annotation scheme, how we obtained annotations, and the structure of the corpus. 3.1 Privacy Policy Selection Privacy policies vary in length, complexity, legal sophistication, and coverage of services. For instance, privacy policies of large companies may cover multiple services, websites, apps, and even physical stores; such policies are often crafted by legal teams and frequently updated. Privacy poli1331 cies of smaller or less popular companies may have narrower focus or vary in employed language, and they may be updated less frequently. To reflect this diversity, we used a twostep process for policy selection: (1) relevancebased website pre-selection and (2) sector-based sub-sampling. First, we monitored Google Trends (Google, 2015) for one month (May 2015) to collect the top five search queries for each trend. Then, for each query we retrieved the first five websites listed on each of the first 10 pages of results. This process produced a diverse sample of 1,799 unique websites. Second, we sub-sampled from this website dataset according to DMOZ.org’s top-level website sectors.2 More specifically, we organized the dataset into 15 sectors (e.g., Arts, Shopping, Business, News). We excluded the “World” sector and limited the “Regional” sector to the “U.S.” subsector in order to ensure that all privacy policies in our corpus are subject to the same legal and regulatory requirements. We ranked the websites in each sector according to their frequency in the retrieved search results. Then we selected eight websites from each sector by randomly chosing two websites from each rank quartile. For each selected website, we manually verified that it had an English-language privacy policy and that it pertained to a US company (based on contact information and WHOIS entry) before downloading its privacy policy. Excluded websites were replaced with random re-draws from the same sector rank quartile. 
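A small sketch of the sector-based sub-sampling step just described: rank the sites in a sector by their frequency in the retrieved search results, split the ranking into quartiles, and draw two sites per quartile. The input lists are placeholders, and the manual verification and re-draw steps are not modeled here.

```python
import numpy as np

def subsample_sector(websites, frequencies, per_quartile=2, seed=0):
    """websites: site names in one DMOZ sector (placeholder input);
    frequencies: how often each site appeared in the search results.
    Returns 2 x 4 = 8 randomly chosen sites, two from each rank quartile."""
    rng = np.random.default_rng(seed)
    order = np.argsort(frequencies)[::-1]        # most frequent first
    ranked = [websites[i] for i in order]
    quartiles = np.array_split(ranked, 4)
    chosen = []
    for q in quartiles:
        k = min(per_quartile, len(q))
        chosen.extend(rng.choice(q, size=k, replace=False))
    return chosen

# Toy usage with hypothetical sites and counts.
sites = [f"site{i}.example.com" for i in range(20)]
counts = list(range(20, 0, -1))
print(subsample_sector(sites, counts))
```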
Some privacy policies covered more than one of the selected websites (e.g., the Disney privacy policy covered disney.go.com and espn.go.com), resulting in a final dataset of 115 privacy policies across 15 sectors. 3.2 Annotation Scheme and Process We developed a policy annotation scheme to capture the data practices specified by privacy policies. To ensure the scheme reflected actual policy contents, development occurred as an iterative refinement process, in which a small group of domain experts (privacy experts, public policy experts, and legal scholars) identified different data practice categories and their descriptive attributes from multiple privacy policies. The annotation scheme was then applied to additional policies and refined over multiple iterations during discussions 2The DMOZ.org website sectors are notable for their use by Alexa.com. among the experts. The final annotation scheme consists of ten data practice categories: 1. First Party Collection/Use: how and why a service provider collects user information. 2. Third Party Sharing/Collection: how user information may be shared with or collected by third parties. 3. User Choice/Control: choices and control options available to users. 4. User Access, Edit, & Deletion: if and how users may access, edit, or delete their information. 5. Data Retention: how long user information is stored. 6. Data Security: how user information is protected. 7. Policy Change: if and how users will be informed about changes to the privacy policy. 8. Do Not Track: if and how Do Not Track signals3 for online tracking and advertising are honored. 9. International & Specific Audiences: practices that pertain only to a specific group of users (e.g., children, Europeans, or California residents). 10. Other: additional sub-labels for introductory or general text, contact information, and practices not covered by the other categories. An individual data practice belongs to one of the ten categories above, and it is articulated by a category-specific set of attributes. For example, a User Choice/Control data practice is associated with four mandatory attributes (Choice Type, Choice Scope, Personal Information Type, Purpose) and one optional attribute (User Type). The annotation scheme defines a set of potential values for each attribute. To ground the data practice in the policy text, each attribute also may be associated with a text span in the privacy policy. The set of mandatory and optional attributes reflects the potential level of specificity with which a data practice of a given category may be described. Optional attributes are less common, while mandatory attributes are necessary to represent a data practice. However, privacy policies are often vague or ambiguous on many of these attributes. Therefore, a valid value for each attribute is Unspecified, allowing annotators to express an absence of information. 3www.w3.org/2011/tracking-protection 1332 Documents 115 Words 266,713 Annotated Data Practices 23,194 Annotated Attributes 128,347 Annotated Text Spans 102,576 Annotators Per Document 3 Annotators Total 10 Table 1: Totalized statistics on the corpus. We developed a web-based annotation tool, shown in Figure 1, for skilled annotators to apply our annotation scheme to the selected privacy policies.4 In preparation, privacy policies were divided into paragraph-length segments for annotators to read in the tool, one at a time in sequence. For each segment, an annotator may label zero or more data practices from each category. 
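To make the structure of an annotation concrete, the following hypothetical Python representation mirrors the category/attribute/text-span organization described above. The field names, span encoding, and example attribute values are illustrative only; the released corpus may use a different serialization.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class AttributeValue:
    value: str                    # an allowed value from the scheme, or "Unspecified"
    span: Optional[tuple] = None  # (start, end) offsets of the supporting text, if any

@dataclass
class DataPractice:
    category: str                                          # one of the ten categories
    attributes: Dict[str, AttributeValue] = field(default_factory=dict)

# A hypothetical User Choice/Control practice grounded in one policy segment.
practice = DataPractice(
    category="User Choice/Control",
    attributes={
        "Choice Type": AttributeValue("Opt-out link", span=(120, 162)),
        "Choice Scope": AttributeValue("Unspecified"),
        "Personal Information Type": AttributeValue("Cookies and tracking elements", span=(80, 118)),
        "Purpose": AttributeValue("Advertising", span=(45, 78)),
        "User Type": AttributeValue("Unspecified"),        # the optional attribute
    },
)
print(practice.category, len(practice.attributes))
```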
To create a data practice, an annotator first selects a practice category and then specifies values and text spans for each of its attributes. Annotators can see a list of data practices they have created for a segment and selectively duplicate and edit them to annotate practices that differ only slightly, though we omit these features from the figure for brevity. 4 Composition of the OPP-115 Corpus The annotation process produced a nuanced and diverse dataset, which we describe in detail below. We name the dataset the OPP-115 Corpus (Online Privacy Policies, set of 115) for convenience. 4.1 Policy Contents Table 1 shows some descriptive statistics for the corpus as a whole. Each privacy policy was read by three skilled annotators, who worked independently, and a total of ten annotators participated in the process. All annotators were law students and were compensated for their work at rates appropriate for student employees at their respective universities. They required a mean of 72 minutes per policy, though this number is slightly inflated by outliers when they stepped away from in-progress sessions for extended periods of time. The annotators produced a total of 23K data practices, although this number contains some redundancies between annotators’ efforts.5 In aggregate, these 4Our experts for the annotation scheme development and our skilled annotators were mutually exclusive groups. 5We describe a method to consolidate annotations (i.e., to eliminate redundancies between annotators’ data) in Section 4.2. Here, we analyze policy contents pre-consolidation to avoid propagating the effects of nontrivial assumptions necdata practices are associated with 128K values for attributes and 103K selected spans of policy text. Note that the annotation tool required the selection of a text span for mandatory attributes, but did not require a text-based justification for optional attributes or attributes marked as “Unspecified”. The corpus allows us to investigate the composition of typical privacy policies in terms of data practices. Privacy policies are known for their length and complexity, but those notions do not necessarily entail a density of pertinent information. Table 2 shows the pre-consolidation quantities of practices that we collected in each of the ten annotation categories, along with the mean and median counts of practices per privacy policy. Intuitively, First Party Collection/Use and Third Party Sharing/Collection dominated the rankings by frequency: the collection, usage, and sharing of user data are the primary concerns that compel the production of privacy policies. Data practices in the Other category, while frequent, were mostly statements that were ostensibly not about user data; 57% were introductory, contact, or generic information. Means were above medians for all categories, reflecting rightward skews for all the distributions. Table 2 also contains statistics on segment-level category coverage and annotator agreement. Here, coverage is meant in an ipso facto sense: a practice category covers a policy segment if two of three annotators each identified at least one practice from that category in the segment text. Differences in the category rankings by frequency and by coverage reveal that practices in some categories are less tightly clustered than others. In particular, Data Retention is the second rarest practice category but ranks fourth by segment coverage. 
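Coverage as defined above (a category covers a segment if at least two of the three annotators found a practice of that category in it) can be computed directly from per-annotator labels. A minimal sketch follows, assuming each segment is represented by a list of per-annotator sets of category names; that input format is our own convenience, not the corpus release format.

```python
from collections import Counter

def category_coverage(segments, min_annotators=2):
    """`segments`: list of segments, each a list of per-annotator sets of
    category names found in that segment. Returns, per category, the
    fraction of segments that the category covers."""
    covered = Counter()
    for annotator_sets in segments:
        votes = Counter()
        for cats in annotator_sets:
            votes.update(set(cats))          # one vote per annotator per category
        for cat, n in votes.items():
            if n >= min_annotators:
                covered[cat] += 1
    total = len(segments)
    return {cat: count / total for cat, count in covered.items()}

# Example: a single segment read by three annotators.
segments = [
    [{"First Party Collection/Use"},
     {"First Party Collection/Use", "Data Security"},
     {"First Party Collection/Use"}],
]
print(category_coverage(segments))   # {'First Party Collection/Use': 1.0}
```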
Since Kappa is applied here to an artificial task (annotators were not asked to label entire segments) the common conventions for its interpretation (Carletta, 1996; Viera and Garrett, 2005) are not directly applicable. However, Do Not Track and International and Specific Audiences remain standout categories with the greatest segment-level agreement. We hypothesize that these two categories have the most easily recognizable cues for annotation. Do Not Track practices, for example, are associated with the eponymous phrase. Finally, the pre-consolidation mean and median essary for consolidation. 1333 Figure 1: Web-based tool for our skilled annotators of privacy policies. Category Freq. Mean Median Coverage Fleiss’ Kappa First Party Collection/Use 8,956 78 74 .27 .76 Third Party Sharing/Collection 5,230 45 39 .21 .76 Other 3,551 31 25 .24 .49 User Choice/Control 1,791 16 13 .08 .61 Data Security 1,009 9 7 .05 .67 International and Specific Audiences 941 8 6 .07 .87 User Access, Edit and Deletion 747 6 5 .03 .74 Policy Change 550 5 4 .03 .73 Data Retention 370 3 2 .20 .55 Do Not Track 90 1 0 .01 .91 Table 2: By-category descriptive statistics for the data practices in the corpus. These statistics are calculated prior to consolidating multiple annotators’ work. Means and medians are calculated across the population of policies in the corpus. Coverage and Kappa are calculated in terms of by-segment contents. 0 5 10 15 20 25 600 550 500 450 400 350 300 250 200 150 100 <50 Frequency Data Practices Figure 2: Distribution of data practices per policy. quantities of data practices per policy were 202 and 200, respectively. These do not correspond to columnar totals that may be calculated from Table 2 because the categories were not equally distributed among the privacy policies. Figure 2 shows the distribution of quantities of data practices per policy. The distribution exhibits a skew toward larger numbers of data practices per policy. Importantly, differences in the number of data practices should not be interpreted as varying levels of data protection or privacy. A privacy policy that contains many data practices may exhibit substantial redundancy among them, and a privacy policy with relatively few data practices could merely be concise. In either case, the data practices may be responsive to users concerns or at odds with them. 1334 4.2 Consolidating Annotators’ Work In this section we discuss the problem of consolidation, or merging data practices from multiple annotators if those practices refer to the same underlying practice expressed by the text. The ambiguity and vagueness of privacy policies (Reidenberg et al., 2016) and the sophistication of the annotation scheme are natural limitations on annotator agreement. With that in mind, we present a consolidation procedure to collapse redundant annotations with the proviso that practices labeled by only one or two skilled annotators also have substantial value and merit retention. First, we institute some basic requirements about locality and topicality to determine which data practices are eligible for consolidation. Given a segment, if annotators A1, ..., An (for n = 2 or n = 3 in our dataset) respectively produce sets of data practices P1, ..., Pn, then a selection of data practices p1 ∈P1, ..., pn ∈Pn is eligible to be consolidated into a single data practice only if all of them belong to the same category. 
Additionally, three implicit assumptions in this requirement are that (1) at least two annotators contribute practices to a consolidation set, (2) all the practices are located in the same policy segment, and (3) each practice must belong to a unique annotator. For each segment we create an exhaustive list of eligible combinations of data practices to consolidate, score and rank each combination using a method detailed below, prune the list with a score threshold, and finally perform consolidations in order of ranking until no further consolidations are possible. Consolidation sets containing three annotators’ practices are considered prior to sets containing practices from only two annotators. The data practices in a chosen consolidation set are removed and replaced by a single “master” data practice. To do this, it is necessary to merge sets of values and sets of text spans respectively associated with each attribute. Sets of values are merged using a majority vote if possible and set to Unspecified if otherwise; the latter case occurs in approximately a third of all mergers. Sets of text spans are merged with a strong bias toward recall, by creating a new text span that begins and ends with respectively the first and last indexes in the set. Our scoring method is based on the summative overlap between the sets of text spans associated with attributes of data practices, with normalization to account for longer spans. Since the text                          Figure 3: Consolidation threshold value versus the average number of data practices per segment produced by consolidation. spans connect data practices to the policy text, we use their overlaps as evidence that two annotators’ data practices refer to the same underlying practice in the text. Thus, a score for two data practices that are associated with roughly the same policy text is relatively high, and a score for two data practices that are associated with different text is low. Figure 3 shows the effect of the consolidation threshold on the average number of practices produced by consolidation per policy segment (i.e., excluding those original data practices that were retained because they were not subject to consolidation). Past a threshold value of approximately 0.2, the number of practices steadily decreases. Notably the average number of practices produced by consolidation is substantially less than the average practices per annotator per segment (2.04) at any point on the curve, indicating a relative lack of agreement between annotators in terms of text span selections. As part of the corpus, we release consolidated datasets at threshold values of 0.5, 0.75, and 1. 4.3 Data Exploration Website The data practice annotations are difficult for human readers to interpret without visual connections to the policy text. To help researchers, policy regulators, and the general public understand the structure and utility of the data set, we created a data exploration website6 that visually integrates the data practice annotations with the texts of privacy policies. Site interactivity allows users to search for websites in the dataset or browse by DMOZ sectors. The website also allows users to compare privacy policies by categorical structure, data prac6explore.usableprivacy.org 1335 Figure 4: Comparing five policies on the data exploration website. tice quantities, and reading level.7 Figure 4 shows a sample comparison between five websites. Each policy’s segments are depicted in order from left to right. 
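The scoring and merging steps described above can be sketched as follows. The exact normalization used in the study is not fully specified here, so the overlap score below (summed span intersection divided by summed span length) should be read as one plausible instantiation rather than the released implementation; the data layout (attribute name mapped to a value and a list of character spans) is likewise our own.

```python
def span_overlap_score(practice_a, practice_b):
    """Score two practices by the summed character overlap of the text spans
    attached to their shared attributes, normalized by total span length so
    that long spans are not unduly favored (one plausible choice)."""
    overlap, total = 0, 0
    for attr in set(practice_a) & set(practice_b):
        for (s1, e1) in practice_a[attr]["spans"]:
            for (s2, e2) in practice_b[attr]["spans"]:
                overlap += max(0, min(e1, e2) - max(s1, s2))
        total += sum(e - s for s, e in practice_a[attr]["spans"])
        total += sum(e - s for s, e in practice_b[attr]["spans"])
    return 2 * overlap / total if total else 0.0

def merge_practices(practices):
    """Collapse an eligible consolidation set into one 'master' practice:
    attribute values by majority vote (else 'Unspecified'), text spans by
    taking the envelope from the first to the last selected index."""
    merged = {}
    attrs = set().union(*[set(p) for p in practices])
    for attr in attrs:
        values = [p[attr]["value"] for p in practices if attr in p]
        top, count = max(((v, values.count(v)) for v in set(values)), key=lambda x: x[1])
        merged_value = top if count > len(practices) / 2 else "Unspecified"
        spans = [s for p in practices if attr in p for s in p[attr]["spans"]]
        envelope = (min(s for s, _ in spans), max(e for _, e in spans)) if spans else None
        merged[attr] = {"value": merged_value, "spans": [envelope] if envelope else []}
    return merged
```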
Segments are colored according to the practice categories that annotators labeled within them. Qualitative patterns are discernible; for example, several of these policies have large blocks of First Party Collection/Use toward the beginning or large blocks of Third Party Sharing/Collection further inward. We discuss exploiting such recurring structures in the next section. 5 Prediction of Policy Structure The current human annotation procedure is impractical for covering the entire Internet or accounting for changes in privacy policies. This raises the question of whether the process can be partly automated. In this section we describe our experiments to automatically assign category labels to policy segments, which would enable the simplification of the annotation task. 5.1 Experiments Our dataset consisted of 3,792 segments from 115 privacy policies. We represented the text of each segment as a dense vector using Paragraph2Vec (Le and Mikolov, 2014) and the GENSIM toolkit ( ˇReh˚uˇrek and Sojka, 2010). This approach exploited semantic similarities between words in the vocabulary of privacy policies, acknowledging that the vocabulary in this domain is specialized but not completely standardized. We assigned each policy segment a binary vector of categoryspecific labels, with each element in the vector corresponding to the presence or absence of a data practice category in the segment. We considered a vector with twelve elements, with nine of them 7explore.usableprivacy.org/compare coming from existing practice categories (all except Other). The remaining three came from elevating three attributes of Other to category status: Introductory/Generic, Practice Not Covered, and Privacy Contact Information. We created gold standard data for this problem using a simplified consolidation approach: if two or more annotators agreed that a category is present in a segment, then we labeled that segment with the category. To predict the category labels of privacy policy segments, we tried three approaches. Two were logistic regression and SVM models, for which we treated this as a multi-class classification problem. Since 212 unique category vectors exist, we trimmed the label space to only those that occur in the training set. The third was a sequence labeling approach inspired by prior work to apply hidden Markov models (HMMs) to privacy policy text (Ramanath et al., 2014). Our work differs from this prior work by using labels from an annotation scheme constructed by privacy experts rather than topics developed from an unsupervised method. Additionally, in our formulation, each hidden state corresponds to one of the unique binary vectors that represent classes of category combinations in the training data. The HMM’s transition probabilities capture the tendency of privacy policy authors to organize topics (i.e., practice categories in our annotation scheme) in similar sequences. Since each segment is represented by a unique real-valued vector from Paragraph2Vec, it was not possible to directly obtain an emission probability distribution from the training data. Therefore, we ran the K-Means++ algorithm using the scikit-learn toolkit (Pedregosa et al., 2011) on the segment vector representations and assigned each segment to a cluster. The emission probability distribution then captured the tendencies of a given class and generated the segment 1336 that is represented as a cluster. 
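To make the HMM just described concrete: hidden states are the category-combination classes observed in training, emissions are K-Means cluster identifiers of the dense segment vectors, and Viterbi decoding recovers the most likely label sequence for a policy. The sketch below is an illustrative reconstruction, not the released implementation; the data structures, the add-alpha smoothing, and the function names are our own, while the cluster count of 100 follows the setting reported in Section 5.2.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_hmm(train_policies, segment_vectors, n_clusters=100, alpha=0.1):
    """train_policies: list of label sequences (one class ID per segment);
    segment_vectors: parallel list of lists of dense segment vectors
    (e.g. from Paragraph2Vec). Returns (kmeans, start, trans, emit) as
    log-probabilities estimated from smoothed empirical counts."""
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10).fit(
        np.vstack([v for policy in segment_vectors for v in policy]))
    n_states = 1 + max(s for policy in train_policies for s in policy)
    start = np.full(n_states, alpha)
    trans = np.full((n_states, n_states), alpha)
    emit = np.full((n_states, n_clusters), alpha)
    for labels, vectors in zip(train_policies, segment_vectors):
        clusters = km.predict(np.vstack(vectors))
        start[labels[0]] += 1
        for prev, nxt in zip(labels, labels[1:]):
            trans[prev, nxt] += 1                 # topic-ordering tendencies
        for state, c in zip(labels, clusters):
            emit[state, c] += 1                   # P(cluster | label class)
    return km, np.log(start / start.sum()), \
           np.log(trans / trans.sum(1, keepdims=True)), \
           np.log(emit / emit.sum(1, keepdims=True))

def viterbi(obs, start, trans, emit):
    """Standard Viterbi decoding over log-probabilities; obs is a sequence
    of cluster IDs for one policy's segments."""
    score = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans + emit[:, o][None, :]
        back.append(cand.argmax(0))
        score = cand.max(0)
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

At prediction time a held-out policy's segment vectors are mapped to clusters with `km.predict` and the resulting ID sequence is handed to `viterbi`.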
These two distributions are estimated empirically from the training data, and we used Viterbi decoding to obtain the best labeling sequence during the prediction. 5.2 Results We split the set of 115 policies into subsets of 75 for training and 40 for testing. The number of clusters in the HMM approach8 is set to 100 and the results are shown in Table 3 as means across 10 runs. The standard deviations for these performance figures are generally between 0.01 and 0.05; the one exception is Do Not Track (the least frequent category) with a standard deviation of 0.2. As the table shows, although the HMM does not reach the same performance as SVM, it performs similarly to logistic regression and meets or exceeds its F1score for five categories. We interpret the strength of the SVM as indicating the strong potential to partly automate the policy labeling procedure, especially for two categories: First Party Collection/Use (a standout performance and the category for which the most labeled data exists) and Do Not Track (a perfect performance, likely due to the limited vocabulary used to describe practices in this category). Additionally, while the HMM did not perform as well overall, we note that its micro-average F1 was a slight improvement over logistic regression. With relatively little data to train this HMM, we expect that the accumulation of more labeled instances can improve its performance substantially. 6 Future Directions The OPP-115 Corpus enables research in several directions of interest to natural language processing and usable privacy. We sketch some opportunites for future work below. A central challenge for this research is the scalability of policy annotation. Although it was necessary to annotate the first 115 policies manually, to ensure the annotations were responsive to the annotation scheme, a less labor-intensive approach will be required for large-scale Web coverage. The OPP-115 Corpus is a valuable dataset for this move toward automated methods. Additionally, a strong potential exists for a combination of automated annotation of coarse information and human annotation of finer details. For 8We tuned the parameters of the HMM approach and SVM after performing a five-fold cross validation on the training data. example, automated category labeling of policy segments is feasible, as demonstrated in Section 5. Asking a human to label practices in a single category would be a reduction in effort, especially if they are shown text that is relevant to the category. Crowdsourcing also becomes a possibility when the complexity of the task is reduced. An ambitious goal will be to eliminate human annotators altogether. Our preliminary analysis has shown that the policy vocabularies associated with certain annotations are very distinctive (e.g., the Do Not Track category or financial information as a data type, for example), lending themselves to automatic identification. By producing confidence ratings alongside data practice predictions, an automated system could mitigate its shortcomings. Separately, data practices must be presented to Internet users in a way that is responsive to their concerns. Text summarization is a possibility, using the annotations as a guide for important details to retain. 
Internet users have already demonstrated limited patience with text-based privacy policies, which adds a nuance to this challenge and suggests the need for a combination of text and pictorial representations (or chiefly pictorial representations) to communicate data practices (Schaub et al., 2015). Additional questions of interest include: • How can the data practice annotations for a policy be combined into a cohesive interpretation? The relationships between data practices are not straightforward. Vagueness, contradictions, and unclear scope are all problems for constructing a knowledge base of them. • How can the balance between human and automated methods for annotation be optimized? The model for the ideal combination is subject to parameters such as the availability of resources and the necessary level of confidence for annotations. • How can sectoral norms and outliers be identified automatically? A bank website that collects users’ health information, for example, deserves scrutiny. It seems appropriate to address this question with clustering techniques, using features from the data practices and from the policy text. 1337 LR SVM HMM Category P R F P R F P R F First Party Collection/Use 0.73 0.67 0.70 0.76 0.73 0.75 0.69 0.76 0.72 Third Party Sharing/Collection 0.64 0.63 0.63 0.67 0.73 0.70 0.63 0.61 0.62 User Choice/Control 0.45 0.62 0.52 0.65 0.58 0.61 0.47 0.33 0.39 Introductory/Generic* 0.51 0.50 0.50 0.58 0.49 0.53 0.54 0.49 0.51 Data Security 0.48 0.75 0.59 0.66 0.67 0.67 0.67 0.53 0.59 Internat’l and Specific Audiences 0.49 0.69 0.57 0.70 0.70 0.70 0.67 0.66 0.66 Privacy Contact Information* 0.34 0.72 0.46 0.60 0.68 0.64 0.48 0.59 0.53 User Access, Edit, and Deletion 0.47 0.71 0.57 0.67 0.56 0.61 0.48 0.42 0.45 Practice Not Covered* 0.20 0.47 0.28 0.19 0.26 0.22 0.15 0.12 0.13 Policy Change 0.59 0.83 0.69 0.66 0.88 0.75 0.52 0.68 0.59 Data Retention 0.10 0.35 0.16 0.12 0.12 0.12 0.08 0.12 0.09 Do Not Track 0.45 1.0 0.62 1.0 1.0 1.0 0.45 0.40 0.41 Micro-Average 0.53 0.65 0.58 0.66 0.66 0.66 0.60 0.59 0.60 Table 3: Precision/Recall/F1 for the three models. The three starred categories resulted from the decomposition of the original Other category, which is excluded here. Categories are ordered in this table in descending order by frequency in the dataset. 7 Conclusion We have described the motivation, creation, and analysis of a unique corpus of 115 privacy policies and 23K fine-grained data practice annotations, and we have demonstrated the feasibility of partly automating the annotation process. The annotations reveal the structure and complexity of these documents, which Internet users are expected to understand and accept. This corpus should serve as a resource for language technologies research to help Internet users understand the privacy practices of businesses and other entities that they interact with online. Acknowledgements This work is funded by the National Science Foundation under grants CNS-1330596 and CNS-1330214. The authors would like to acknowledge the law students at Fordham University and the University of Pittsburgh who worked as annotators to make this corpus possible. The authors also wish to acknowledge all members of the Usable Privacy Policy Project (www.usableprivacy.org) for their contributions. References Ngo Xuan Bach, Nguyen Le Minh, Tran Thi Oanh, and Akira Shimazu. 2013. A two-phase framework for learning logical structures of paragraphs in legal articles. 
ACM Transactions on Asian Language Information Processing (TALIP), 12(1):3:1–3:32, March. Trevor Bench-Capon, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges, Daniele Bourcier, Paul Bourgine, Jack G Conrad, Enrico Francesconi, et al. 2012. A History of AI and Law in 50 Papers: 25 Years of the International Conference on AI and Law. Artificial Intelligence and Law, 20(3):215–319. Guido Boella, Luigi Di Caro, Michele Graziadei, Loredana Cupi, Carlo Emilio Salaroglio, Llio Humphreys, Hristo Konstantinov, Kornel Marko, Livio Robaldo, Claudio Ruffini, et al. 2015. Linking legal open data: Breaking the accessibility and language barrier in European legislation and case law. In Proceedings of the 15th International Conference on Artificial Intelligence and Law, ICAIL ’15, pages 171–175. ACM. Kalina Bontcheva, Hamish Cunningham, Ian Roberts, Angus Roberts, Valentin Tablan, Niraj Aswani, and Genevieve Gorrell. 2013. GATE teamware: a web-based, collaborative text annotation framework. Language Resources and Evaluation, 47(4):1007– 1029. Travis D. Breaux and Florian Schaub. 2014. Scaling requirements extraction to the crowd. In RE’14: Proceedings of the 22nd IEEE International Requirements Engineering Conference, RE’ 14, pages 163–172, Washington, DC, USA, August. IEEE Society Press. Travis D. Breaux, Hanan Hibshi, and Ashwini Rao. 2013. Eddy, a formal language for specifying and analyzing data flow specifications for conflicting privacy requirements. Requirements Engineering, 19(3):281–307, September. 1338 Jean Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254, June. F.H. Cate. 2010. The limits of notice and choice. IEEE Security Privacy, 8(2):59–62, March. Parvathi Chundi and Pranav M. Subramaniam. 2014. An approach to analyze web privacy policy documents. In KDD Workshop on Data Mining for Social Good. Elisa Costante, Yuanhao Sun, Milan Petkovi´c, and Jerry den Hartog. 2012. A machine learning solution to assess privacy policy completeness. In Proceedings of the 2012 ACM Workshop on Privacy in the Electronic Society, WPES ’12, pages 91–96. ACM. Elisa Costante, Jerry den Hartog, and Milan Petkovi´c. 2013. What websites know about you: Privacy policy analysis using information extraction. In Roberto Di Pietro, Javier Herranz, Ernesto Damiani, and Radu State, editors, Data Privacy Management and Autonomous Spontaneous Security, volume 7731 of Lecture Notes in Computer Science, pages 146–159. Springer. Lorrie Faith Cranor, Kelly Idouchi, Pedro Giovanni Leon, Manya Sleeper, and Blase Ur. 2013. Are they actually any different? Comparing thousands of financial institutions’ privacy practices. In Workshop on the Economics of Information Security, WEIS ’13, June. Michael Curtotti and Eric McCreath. 2013. A right to access implies a right to know: An open online platform for research on the readability of law. J. Open Access L., 1:1–56. Tatiana Ermakova, Benjamin Fabian, and Eleonora Babina. 2015. Readability of privacy policies of healthcare websites. In 12. Internationale Tagung Wirtschaftsinformatik (Wirtschaftsinformatik 2015). Federal Trade Commission. 2012. Protecting consumer privacy in an era of rapid change: Recommendations for businesses and policymakers. http://www.ftc.gov/reports. Last accessed: June 22, 2016. Enrico Francesconi, Simonetta Montemagni, Wim Peters, and Daniela Tiscornia. 2010. 
Semantic processing of legal texts: Where the language of law meets the law of language, volume 6036 of Lecture Notes in Artificial Intelligence. Springer. Filippo Galgani, Paul Compton, and Achim Hoffmann. 2012. Combining different summarization techniques for legal text. In Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, HYBRID ’12, pages 115– 123. ACL. Google. 2015. Google Trends. https://www. google.com/trends/. Last accessed: June 22, 2016. Candice Hoke, Lorrie Cranor, Pedro Leon, and Alyssa Au. 2015. Are they worth reading? An in-depth analysis of online tracker’s privacy policies. I/S: A Journal of Law and Policy for the Information Society, 11(2):325–404, April. Carlos Jensen and Colin Potts. 2004. Privacy policies as decision-making tools: An evaluation of online privacy notices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, pages 471–478. ACM. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053. Fei Liu, Rohan Ramanath, Norman Sadeh, and Noah A. Smith. 2014. A step towards usable privacy policy: Automatic alignment of privacy statements. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, COLING ’14, pages 884–894. ANLP, August. Lars Mahler. 2015. What is NLP and why should lawyers care? http://www. lawpracticetoday.org/article/ nlp-lawyers/, February. Last accessed: June 22, 2016. Aaron K. Massey, Jacob Eisenstein, Annie I. Ant´on, and Peter P. Swire. 2013. Automated text mining for requirements analysis of policy documents. In 21st IEEE International Requirements Engineering Conference, RE ’13, pages 4–13. IEEE. Aleecia M. McDonald and Lorrie Faith Cranor. 2008. The cost of reading privacy policies. I/S: J Law & Policy Info. Soc., 4(3):540–561. Gabriele Meiselwitz. 2013. Readability assessment of policies and procedures of social networking sites. In A. Ant Ozok and Panayiotis Zaphiris, editors, 5th International conference, OCSC 2013, Held as Part of HCI International 2013, Lecture Notes in Computer Science, pages 67–75. Springer, January. Vytautas Mickevicius, Tomas Krilavicius, and Vaidas Morkevicius. 2015. Classification of short legal Lithuanian texts. In The 5th Workshop on BaltoSlavic Natural Language Processing, BSNLP ’15, pages 106–111. ACM. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. President’s Concil of Advisors on Science and Technology. 2014. Big data and privacy: a technological perspective. Report to the President, Executive Office of the President, May. 1339 Rohan Ramanath, Fei Liu, Norman Sadeh, and Noah A. Smith. 2014. Unsupervised alignment of privacy policies using hidden markov models. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, ACL ’14, pages 605–610. ACL, June. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50. ELRA, May. Joel R. Reidenberg, Travis Breaux, Lorrie Faith Cranor, Brian French, Amanda Grannis, James T. Graves, Fei Liu, Aleecia McDonald, Thomas B. Norton, Rohan Ramanath, N. 
Cameron Russell, Norman Sadeh, and Florian Schaub. 2015a. Disagreeable privacy policies: Mismatches between meaning and users’ understanding. Berkeley Technology Law Journal, 30(1):39–88. Joel R. Reidenberg, N. Cameron Russell, Alexander J. Callen, Sophia Qasir, and Thomas B. Norton. 2015b. Privacy harms and the effectiveness of the notice and choice framework. I/S: J Law & Policy Info. Soc., 11(2). Joel R. Reidenberg, Jaspreet Bhatia, Travis Breaux, and Thomas B. Norton. 2016. Automated comparisons of ambiguity in privacy policies and the impact of regulation. Social Science Research Network Working Paper Series, January. Norman Sadeh, Alessandro Acquisti, Travis D. Breaux, Lorrie Faith Cranor, Aleecia M. Mcdonald, Joel R. Reidenberg, Noah A. Smith, Fei Liu, N. Cameron Russell, Florian Schaub, and Shomir Wilson. 2013. The usable privacy policy project: Combining crowdsourcing, machine learning and natural language processing to semi-automatically answer those privacy questions users care about. Technical Report CMU-ISR-13-119, Carnegie Mellon University. Giovanni Sartor and Antonino Rotolo. 2013. AI and law. In Agreement Technologies, pages 199–207. Springer. Jarom´ır ˇSavelka and Kevin D Ashley. 2015. Transfer of predictive models for classification of statutory texts in multi-jurisdictional settings. In Proceedings of the 15th International Conference on Artificial Intelligence and Law, ICAIL ’15, pages 216–220. ACM. Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor. 2015. A design space for effective privacy notices. In Eleventh Symposium On Usable Privacy and Security, SOUPS ’15, pages 1–17. USENIX Association, July. John W. Stamey and Ryan A. Rossi. 2009. Automatically identifying relations in privacy policies. In 27th ACM International Conference on Design of Communication, SIGDOC ’09, pages 233–238. ACM. Anthony J. Viera and Joanne M. Garrett. 2005. Understanding interobserver agreement: the kappa statistic. Family Medicine, 37(5):360–363, 5. Rigo Wenning, Matthias Schunter, Lorrie Cranor, B. Dobbs, S. Egelman, G. Hogben, J. Humphrey, M. Langheinrich, M. Marchiori, M. PreslerMarshall, J. Reagle, and D. A. Stampley. 2006. The platform for privacy preferences 1.1 (P3P 1.1) specification. Shomir Wilson, Florian Schaub, Rohan Ramanath, Norman Sadeh, Fei Liu, Noah A. Smith, and Frederick Liu. 2016. Crowdsourcing annotations for websites’ privacy policies: Can it really work? In Proceedings of the 25th World Wide Web Conference, WWW ’13, pages 133–143. Sebastian Zimmeck and Steven M. Bellovin. 2014. Privee: An architecture for automatically analyzing web privacy policies. In 23rd USENIX Security Symposium, USENIX Security ’14, pages 1–16. USENIX Association, August. 1340
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1341–1350, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Sequence-based Structured Prediction for Semantic Parsing Chunyang Xiao and Marc Dymetman Xerox Research Centre Europe 6-8 Chemin de Maupertuis Meylan, 38240, France chunyang.xiao, marc.dymetman @xrce.xerox.com Claire Gardent CNRS, LORIA, UMR 7503 Vandoeuvre-l`es-Nancy, F-54500, France [email protected] Abstract We propose an approach for semantic parsing that uses a recurrent neural network to map a natural language question into a logical form representation of a KB query. Building on recent work by (Wang et al., 2015), the interpretable logical forms, which are structured objects obeying certain constraints, are enumerated by an underlying grammar and are paired with their canonical realizations. In order to use sequence prediction, we need to sequentialize these logical forms. We compare three sequentializations: a direct linearization of the logical form, a linearization of the associated canonical realization, and a sequence consisting of derivation steps relative to the underlying grammar. We also show how grammatical constraints on the derivation sequence can easily be integrated inside the RNNbased sequential predictor. Our experiments show important improvements over previous results for the same dataset, and also demonstrate the advantage of incorporating the grammatical constraints. 1 Introduction Learning to map natural language utterances (NL) to logical forms (LF), a process known as semantic parsing, has received a lot of attention recently, in particular in the context of building QuestionAnswering systems (Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014). In this paper, we focus on such a task where the NL question may be semantically complex, leading to a logical form query with a fair amount of compositionality, in a spirit close to (Pasupat and Liang, 2015). Given the recently shown effectiveness of RNNs (Recurrent Neural Networks), in particular Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), for performing sequence prediction in NLP applications such as machine translation (Sutskever et al., 2014) and natural language generation (Wen et al., 2015), we try to exploit similar techniques for our task. However we observe that, contrary to those applications which try to predict intrinsically sequential objects (texts), our task involves producing a structured object, namely a logical form that is tree-like by nature and also has to respect certain a priori constraints in order to be interpretable against the knowledge base. In our case, building on the work “Building a Semantic Parser Overnight” (Wang et al., 2015), which we will refer to as SPO, the LFs are generated by a grammar which is known a priori, and it is this grammar that makes explicit the structural constraints that have to be satisfied by the LFs. The SPO grammar, along with generating logical forms, generates so-called “canonical forms” (CF), which are direct textual realizations of the LF that, although they are not “natural” English, transparently convey the meaning of the LF (see Fig. 1 for an example). Based on this grammar, we explore three different ways of representing the LF structure through a sequence of items. 
The first one (LF Prediction, or LFP), and simplest, consists in just linearizing the LF tree into a sequence of individual tokens; the second one (CFP) represents the LF through its associated CF, which is itself a sequence of words; and finally the third one (DSP) represents the LF through a derivation sequence (DS), namely the sequence of grammar rules that were chosen to produce this LF. We then predict the LF via LSTM-based models that take as input the NL question and map it into 1341 NL: article published in 1950 CF: article whose publication date is 1950 LF: get[[lambda,s,[filter,s,pubDate,=,1950]],article] DT: s0(np0 (np1 (typenp0), cp0 (relnp0, entitynp0)) DS: s0 np0 np1 typenp0 cp0 relnp0 entitynp0 Figure 1: Example of natural language utterance (NL) from the SPO dataset and associated representations considered in this work. CF: canonical form, LF: logical form, DT: derivation tree, DS: derivation sequence. one of the three sequentializations. In the three cases, the LSTM predictor cannot on its own ensure the grammaticality of the predicted sequence, so that some sequences do not lead to well-formed LFs. However, in the DSP case (in contrast to LFP and CFP), it is easy to integrate inside the LSTM predictor local constraints which guarantee that only grammatical sequences will be produced. In summary, the contribution of our paper is twofold. Firstly, we propose to use sequence prediction for semantic parsing. Our experimental results show some significant improvements over previous systems. Secondly, we propose to predict derivation sequences taking into account grammatical constraints and we show that the model performs better than sequence prediction models not exploiting this knowledge. These results are obtained without employing any reranking or linguistic features such as POS tags, edit distance, paraphrase features, etc., which makes the proposed methodology even more promising. 2 Background on SPO The SPO paper (Wang et al., 2015) proposes an approach for quickly developing semantic parsers for new knowledge bases and domains when no training data initially exists. In this approach, a small underlying grammar is used to generate canonical forms and pair them with logical forms. Crowdsourcing is then used to paraphrase each of these canonical forms into several natural utterances. The crowdsourcing thus creates a dataset (SPO dataset in the sequel) consisting of (NL, CF, LF) tuples where NL is a natural language question with CF and LF the canonical and the logical form associated with this question. SPO learns a semantic parser on this dataset by firstly learning a log-linear similarity model based on a number of features (word matches, ppdb matches, matches between semantic types and POSs, etc.) between NL and the corresponding (CF, LF) pair. At decoding time, SPO parses a natural utterance NL by searching among the derivations of the grammar for one for which the projected (CF, LF) is most similar to the NL based on this log-linear model. The search is based on a so-called “floating parser” (Pasupat and Liang, 2015), a modification of a standard chart-parser, which is able to guide the search based on the similarity features. In contrast, our approach does not search among the derivations for the one that maximizes a match with the NL, but instead directly tries to predict a decision sequence that can be mapped to the LF. The SPO system together with its dataset were released to the public1 and our work exploits this release. 
3 Approach 3.1 Grammars and Derivations s0: s(S) →np(S). np0: np(get[CP,NP]) →np(NP), cp(CP). np1: np(NP) →typenp(NP). cp0: cp([lambda,s,[filter,s,RELNP,=,ENTNP]]) → [whose], relnp(RELNP), [is], entitynp(ENTNP). ... typenp0: typenp(article) →[article]. relnp0: relnp(pubDate) →[publication, date] entitynp0: entitynp(1950) →[1950]. ... Figure 2: Some general rules (top) and domainspecific rules (bottom) in DCG format. The core grammatical resource released by SPO is a generic grammar connecting logical forms with canonical form realizations. They also provide seven domain-specific lexica that can be used in combination with the generic grammar to obtain domain-specific grammars which generate (LF, CF) pairs in each domain, in such a way that LF can then be used to query the corresponding knowledge base. While SPO also released a set of 1https://github.com/percyliang/sempre 1342 s0 np0 np1 cp0 typenp0 relnp0 entitynp0 Figure 3: A derivation tree. Its leftmost derivation sequence is [s0, np0, np1, typenp0, cp0, relnp0, entitynp0]. typenp0 article article relnp0 publication date pubDate entitynp0 1950 1950 cp0 whose publication date is 1950 [lambda,s,[filter,s,pubDate,=,1950] np1 article article np0 article whose publication date is 1950 get[[lambda,s,[filter,s,pubDate,=,1950]],article] s0 article whose publication date is 1950 get[[lambda,s,[filter,s,pubDate,=,1950]],article] Figure 4: Projection of the derivation tree nodes into (i) a canonical form and (ii) a logical form. Java-based parsers and generators for these grammars, for our own purposes we found it convenient to translate the grammars into the formalism of Definite Clause Grammars (Pereira and Warren, 1980), a classical unification-based extension of CFGs, which — through a standard Prolog interpreter such as SWIPL2 — provide direct support for jointly generating textual realizations and logical forms and also for parsing text into logical forms; we found this translation process to be rather straightforward and we were able to cover all of the SPO grammars. Figure 2 lists a few DCG rules, general rules first, then lexical rules, for the SPO “publications” domain. Nonterminals are indicated in bold, terminals in italics. We provide each rule with a unique identifier (e.g. s0, np0, ...), which is obtained by concatenating the name of its head nonterminal with a position number relative to the rules that may expand this nonterminal; we can then consider that the nonterminal (e.g. np) is the “type” of all its expanding rules (e.g. np0, np1, ...). According to standard DCG notation, upper2http://www.swi-prolog.org/ case items S, NP, CP, RELNP, ENTNP denote unification variables that become instantiated during processing. In our case unificaion variables range over logical forms and each nonterminal has a single argument denoting a partially instantiated associated logical form. For instance, in the cp0 rule, relnp is associated with the logical form RELNP, entitynp with the logical form ENTNP, and the LHS nonterminal cp is then associated with the logical form [lambda, s, [filter, s, RELNP, =, ENTNP]].3 In Figure 3, we display a derivation tree DT (or simply derivation) relative to this grammar, where each node is labelled with a rule identifier. This tree projects on the one hand onto the canonical form article whose publication date is 1950, on the other hand onto the logical form get[[lambda,s,[filter,s, pubDate,=,1950]],article]. Figure 4 shows how these projections are obtained by bottom-up composition. 
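A small sketch of these two projections, using Python dictionaries in place of the DCG: each rule records its child nonterminal types (shown only for documentation), a canonical-form template with slots for the children, and a function that composes the children's logical forms. The rule inventory mirrors the excerpt in Figure 2 but is, of course, only a toy fragment, and the list-based logical forms are an informal stand-in for the DCG terms.

```python
RULES = {
    #  rule       child types            CF template                  LF builder
    "s0":        (["np"],                "{0}",                       lambda np: np),
    "np0":       (["np", "cp"],          "{0} {1}",                   lambda np, cp: ["get", [cp, np]]),
    "np1":       (["typenp"],            "{0}",                       lambda t: t),
    "cp0":       (["relnp", "entitynp"], "whose {0} is {1}",
                  lambda rel, ent: ["lambda", "s", ["filter", "s", rel, "=", ent]]),
    "typenp0":   ([],                    "article",                   lambda: "article"),
    "relnp0":    ([],                    "publication date",          lambda: "pubDate"),
    "entitynp0": ([],                    "1950",                      lambda: 1950),
}

def project(tree):
    """tree = (rule_name, [child trees]); returns (canonical form, logical form)
    by bottom-up composition, as in Figure 4."""
    rule, children = tree
    _, cf_template, lf_builder = RULES[rule]
    child_cfs, child_lfs = zip(*[project(c) for c in children]) if children else ((), ())
    return cf_template.format(*child_cfs), lf_builder(*child_lfs)

# The derivation tree of Figure 3.
dt = ("s0", [("np0", [("np1", [("typenp0", [])]),
                      ("cp0", [("relnp0", []), ("entitynp0", [])])])])
cf, lf = project(dt)
# cf == "article whose publication date is 1950"
# lf == ["get", [["lambda", "s", ["filter", "s", "pubDate", "=", 1950]], "article"]]
```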
For instance, the textual projection of node cp0 is obtained from the textual representations of nodes relnp0 and entitynp0, according to the RHS of the rule cp0, while its logical form projection is obtained by instantiation of the variables RELNP and ENTNP respectively to the LFs associated with relnp0 and entitynp0. Relative to these projections, one may note a fundamental difference between derivation trees DT and their projections CF and LF: while the well-formedness of DT can simply be assessed locally by checking that each node expansion is valid according to the grammar, there is in principle no such easy, local, checking possible for the canonical or the logical form; in fact, in order to check the validity of a proposed CF (resp. LF), one needs to search for some DT that projects onto this CF (resp LF). The first process, of course, is known as “parsing”, the second process as “generation”. While parsing has polynomial complexity for grammars with a context-free backbone such as the ones considered here, deciding whether a logical form is well-formed or not could in principle be undecidable for certain forms of LF composition.4 3This logical form is written here in DCG list notation; in the more “Lispian” format used by SPO, it would be written (lambda s (filter s RELNP = ENTNP)). 4The term ‘projection’ is borrowed from the notion of bimorphism in formal language theory (Shieber, 2014) and refers in particular to the fact that the overall logical form is constructed by bottom-up composition of logical forms asso1343 To be able to leverage sequence prediction models, we can associate with each derivation tree DT its leftmost derivation sequence DS, which corresponds to a preorder traversal of the tree. For the tree of Figure 3, this sequence is [s0, np0, np1, typenp0, cp0, relnp0, entitynp0]. When the grammar is known (in fact, as soon as the CFG core of the grammar is known), two properties of the DS hold (we omit the easy algorithms underlying these properties; they involve using a prefix of the DS for constructing a partial derivation tree in a top-down fashion): 1. knowing the DS uniquely identifies the derivation tree. 2. knowing a prefix of the DS (for instance [s0, np0, np1, typenp0]) completely determines the type of the next item (here, this type is cp). The first property implies that if we are able to predict DS, we are also able to predict DT, and therefore also LF and CF. The second property implies that the sequential prediction of DS is strongly constrained by a priori knowledge of the underlying grammar: instead of having to select the next item among all the possible rules in the grammar, we only have to select among those rules that are headed by a specific nonterminal. Under a simple condition on the grammar (namely that there are no “unproductive” rules, rules that can never produce an output5), following such constrained selection for the next rule guarantees that the derivation sequence will always lead to a valid derivation tree. At this point, a theoretical observation should be made: there is no finite-state mechanism on the sequence of rule-names that can control whether the next rule-name is valid or not.6 The relevance of that observation for us is that the RNNs that we use are basically finite-state devices (with a huge number of states, but still finite-state), and therefore we do not expect them in principle to be able ciated with lower nodes in the derivation tree. 
In our DCG grammars, this composition actually involves more complex operations (such as “beta-reduction”) than the simple copyings illustrated in the small excerpt of Fig. 2. 5The general grammar ensures a good coverage of possible logical and canonical forms. However, when this general grammar is used in particular domains, some rules are not relevant any more (i.e. become ”unproductive”), but these can be easily eliminated at compile time. 6This is easy to see by considering a CFG generating the non finite-state language anbn. to always produce valid derivation sequences unless they can exploit the underlying grammar for constraining the next choice. 3.2 Sequence prediction models In all these models, we start from a natural utterance NL and we predict a sequence of target items, according to a common sequence prediction architecture that will be described in section 3.3. 3.2.1 Predicting logical form (LFP model) The most direct approach is to directly predict a linearization of the logical form from NL, the input question. While an LF such as that of Figure 1 is really a structured object respecting certain implicit constraints (balanced parentheses, consistency of the variables bound by lambda expressions, and more generally, conformity with the underlying grammar), the linearization treats it simply as a sequence of tokens: get [ [ lambda s [ filter s pubDate = 1950 ] ] article ]. At training time, the LFP model only sees such sequences, and at test time, the next token in the target sequence is then predicted without taking into account any structural constraints. The training regime is the standard one attempting to minimize the cross-entropy of the model relative to the logical forms in the training set. 3.2.2 Predicting derivation sequence (DSP-X models) Rather than predicting LF directly, we can choose to predict a derivation sequence DS, that is, a sequence of rule-names, and then project it onto LF. We consider three variants of this model. DSP This basic derivation sequence prediction model is trained on pairs (NL, DS) with the standard training regime. At test time, it is possible for this model to predict ill-formed sequences, which do not correspond to grammatical derivation trees, and therefore do not project onto any logical form. DSP-C This is a Constrained variant of DSP where we use the underlying grammar to constrain the next rule-name. We train this model exactly as the previous one, but at test time, when sampling the next rule-name inside the RNN, we reject any rule that is not a possible continuation. DSP-CL This last model is also constrained, but uses a different training regime, with Constrained Loss. In the standard learning regime (used for the 1344 Target sequence DS CF LF Length 10.5 11.8 47.0 Vocabulary Size 106.0 55.8 59.9 Table 1: Characteristics of different target sequences. two previous models), the incremental loss when predicting the next item yt of the sequence is computed as −log p(yt), where p(yt) is the probability of yt according to the RNN model, normalized (through the computation of a softmax) over all the potential values of yt (namely, here, all the rules in the grammar). By contrast, in the CL learning regime, the incremental loss is computed as −log p′(yt), where p′(yt) is normalized only over the values of yt that are possible continuations once the grammar-induced constraints are taken into account, ignoring whatever weights the RNN predictor may (wrongly) believe should be put on impossible continuations. 
In other words, the DSP-CL model incorporates the prior knowledge about well-formed derivation sequences that we have thanks to the grammar. It computes the actual cross-entropy loss according to the underlying generative process of the model that is used once the constraints are taken into account. 3.2.3 Predicting canonical form (CFP model) The last possibility we explore is to predict the sequence of words in the canonical form CF, and then use our grammar to parse this CF into its corresponding LF, which we then execute against the knowledge base.7 Table 1 provides length and vocabulary-size statistics for the LFP, DSP and CFP tasks. We see that, typically, for the different domains, DS is a shorter sequence than LF or CF, but its vocabulary size (i.e. number of rules) is larger than that of LF or CF. However DS is unique in allowing us to easily validate grammatical constraints. We also note that the CF is less lengthy than the 7Although the general intention of SPO is to unambiguously reflect the logical form through the canonical form (which is the basis on which Turkers provide their paraphrases), we do encounter some cases where, although the CF is well-formed and therefore parsable by the grammar, several parses are actually possible, some of which do not correspond to queries for which the KB can return an answer. In these cases, we return the first parse whose logical form does return an answer. Such situations could be eliminated by refining the SPO grammar to a moderate extent, but we did not pursue this. article 𝑢𝑏 whose publication 𝑢𝑙,𝑡 𝑢𝑙,𝑡+1 𝑢𝑙,𝑡+2 LSTM encoding for the prefix of a sequence of items 𝑢𝑏 𝑢𝑏 whose publication date Figure 5: Our neural network model which is shared between all the systems. An MLP encodes the sentence in unigrams and bigrams and produces ub. An LSTM encodes the prefix of the predicted sequence generating ul,t for each step t. The two representations are then fed into a final MLP to predict the next choice of the target sequence. LF, which uses a number of non “word-like” symbols such as parentheses, lambda variables, and the like. 3.3 Sequence prediction architecture 3.3.1 Neural network model The goal of our neural network is to estimate the conditional probability p(y1, . . . , yT ′|x1, . . . , xT ) where (x1, . . . , xT ) is a natural language question and (y1, . . . , yT ′) is a target sequence (linearized LF, CF or derivation sequence). In all three cases, we use the same neural network model, which we explain in this subsection. Suppose that the content of the NL is captured in a real-valued vector ub, while the prefix of the target sequence up to time t is captured in another real-valued vector ul,t. Now, the probability of the target sequence given the input question can be estimated as: p(y1, . . . yT ′|x1, . . . , xT ) = T ′ Y t=1 p(yt|ub, y1, . . . yt−1) = T ′ Y t=1 p(yt|ub, ul,t−1) In all our systems, the ub capturing the content of the NL is calculated from the concatenation of a vector u1 reading the sentence based on unigrams and another vector u2 reading the sentence based on bigrams. Mathematically, u1 = tanh(W1v1) where v1 is the 1-hot unigram encoding of the NL 1345 and u2 = tanh(W2v2) where v2 is its 1-hot bigram encoding. Then ub = tanh(Wu), where u is the concatenation of u1 and u2. W1, W2 and W are among the parameters to be learnt. For regularization purposes, a dropout procedure (Srivastava et al., 2014) is applied to u1 and u2. 
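As a concrete, necessarily approximate rendering of the sentence encoder just described, the fragment below uses the present-day Keras functional API rather than the 2015-era Keras/Theano code the authors mention later. The 50- and 200-dimensional sizes follow Section 4.2; the dropout defaults are simply values from the tuning grid reported there, and the layer names are our own.

```python
from tensorflow.keras import layers, Model

def build_sentence_encoder(n_unigrams, n_bigrams,
                           uni_dim=50, bi_dim=50, ub_dim=200,
                           uni_dropout=0.1, bi_dropout=0.2):
    """u1 = tanh(W1 v1), u2 = tanh(W2 v2), ub = tanh(W [u1; u2]),
    with dropout applied to u1 and u2 as in the text; v1 and v2 are the
    bag-of-unigrams and bag-of-bigrams vectors of the question."""
    v1 = layers.Input(shape=(n_unigrams,), name="unigram_bow")
    v2 = layers.Input(shape=(n_bigrams,), name="bigram_bow")
    u1 = layers.Dropout(uni_dropout)(layers.Dense(uni_dim, activation="tanh")(v1))
    u2 = layers.Dropout(bi_dropout)(layers.Dense(bi_dim, activation="tanh")(v2))
    ub = layers.Dense(ub_dim, activation="tanh")(layers.Concatenate()([u1, u2]))
    return Model([v1, v2], ub, name="nl_encoder")
```

The resulting ub would then be concatenated with the LSTM encoding of the already-predicted target prefix and passed through the final two-layer MLP, as described next.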
The prefix of the target sequence up to time t is modelled with the vector ul,t generated by the latest hidden state of an LSTM (Hochreiter and Schmidhuber, 1997); LSTM is appropriate here in order to capture the long distance dependencies inside the target sequence. The vector ul,t is then concatenated with ub (forming ubl in the equation below) before passing through a two-layer MLP (Multi-Layer Perceptron) for the final prediction: p(yt+1|ul,t, ub) = softmax(W ′ 2 tanh(W ′ 1ubl)) Using deep structures such as this MLP for RNN prediction has been shown to be beneficial in previous work (Pascanu et al., 2013). The overall network architecture is summarized in Figure 5. We train the whole network to minimize the cross entropy between the predicted sequence of items and the reference sequence. This network architecture can easily support other representations for the input sentence than unigrams and bigrams, as long as they are realvalued vectors of fixed length. We can just concatenate them with u1 and u2 and generate ub as previously. In fact, in initial experiments, we did concatenate an additional representation which reads the sentence through an LSTM, but the performance was not improved. 3.3.2 Decoding the target sequence We implemented a uniform-cost search algorithm (Russell and Norvig, 2003) to decode the best decision sequence as the sequence with the highest probability. The algorithm finishes in a reasonable time for two reasons: 1) as indicated by Table 1, the vocabulary size of each domain is relatively small, and 2) we found that our model predicts relatively peaked distributions. Of course, it would also be easy to use a beam-search procedure, for situations where these conditions would not hold. 4 Experiments 4.1 Setup We conduct our experiments on the SPO dataset. To test the overall performance of a semantic parser, the SPO dataset contains seven domains focusing on different linguistic phenomena such as multi-arity relations, sublexical compositionality etc. The utterances in each domain are annotated both with logical forms (LFs) and canonical forms (CFs). The number of such utterances vary from 800 to 4000 depending on the domain. The size of training data is indeed small but as the target vocabulary is always in the domain, thus very small as well, it is actually possible to learn a reasonable semantic parser. In the SPO dataset, the natural utterances were split randomly into 80%-20% for training and test, and we use the same sets. We perform an additional 80%-20% random split on the SPO training data and keep the 20% as development set to choose certain hyperparameters of our model. Once the hyperparameters are chosen, we retrain on the whole training data before testing. For LFP experiments, we directly tokenize the LF, as explained earlier, and for CFP experiments we directly use the CF. For DSP experiments (DSP, DSP-C, DSP-CL) where our training data consist of (NL, DS) pairs, the derivation sequences are obtained by parsing each canonical form using the DCG grammar of section 3. We compare our different systems to SPO. While we only use unigram and bigram features on the NL, SPO uses a number of features of different kinds: linguistic features on NL such as POS tags, lexical features computing the similarity between words in NL and words in CF, semantic features on types and denotations, and also features based on PPDB (Ganitkevitch et al., 2013). 
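The grammar-constrained decoding that distinguishes DSP-C can be sketched as a uniform-cost search in which, at every step, only the rules headed by the nonterminal required by the current partial derivation are scored (the second property of derivation sequences noted in Section 3.1). In the sketch, `score_next_rule` stands in for the trained LSTM predictor and `next_nonterminal` for the grammar-driven continuation check; both are assumptions of this illustration rather than names from the authors' code.

```python
import heapq
import math

def constrained_decode(score_next_rule, next_nonterminal, rules_by_head, max_len=50):
    """Uniform-cost search over derivation sequences.
    score_next_rule(prefix, rule) -> model probability of `rule` given the prefix;
    next_nonterminal(prefix)      -> nonterminal the grammar requires next,
                                     or None when the derivation is complete;
    rules_by_head[nt]             -> rule names headed by nonterminal nt."""
    frontier = [(0.0, [])]                      # (accumulated negative log prob, prefix)
    while frontier:
        cost, prefix = heapq.heappop(frontier)
        head = next_nonterminal(prefix)
        if head is None:                        # complete, necessarily grammatical derivation
            return prefix
        if len(prefix) >= max_len:
            continue
        for rule in rules_by_head[head]:        # grammatical continuations only
            p = score_next_rule(prefix, rule)
            if p > 0.0:
                heapq.heappush(frontier, (cost - math.log(p), prefix + [rule]))
    return None
```

Renormalizing p over `rules_by_head[head]` before taking the logarithm would turn this scoring into the DSP-CL variant discussed in Section 4.4.2.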
At test time, like SPO, we evaluate our system on the proportion of questions for which the system is able to find the correct answer in the knowledge base. 4.2 Implementation details We choose the embedding vectors u1 for unigrams and u2 for bigrams to have 50 dimensions. The vector ub representing the sentence content has 200 dimensions. The word embedding layer has 100 dimensions, which is also the case of the hidden layer of the LSTM ul,t. Thus ubl which is the concatenation of ub and ul,t has 300 dimensions and we fix the next layer to ubl to have 100 dimensions. The model is implemented in Keras8 on top of Theano (Bergstra et al., 2010). For all the exper8https://github.com/fchollet/keras 1346 iments, we train our models using rmsprop (Tieleman and Hinton., 2012) as the backpropagation algorithm9. We use our development set to select the number of training epochs, the dropout factor over unigrams representation and the dropout factor over bigrams representation, by employing a grid search over these hyperparameters: epochs in {20, 40, 60}, unigrams dropout in {0.05, 0.1} and bigrams dropout in {0.1, 0.2, 0.3}. 4.3 Experimental results 4.3.1 Results on test data Table 2 shows the test results of SPO and of our different systems over the seven domains. It can be seen that all of our sequence-based systems are performing better than SPO by a large margin on these tests. When averaging over the seven domains, our ‘worst’ system DSP scores at 64.7% compared to SPO at 57.1%. We note that these positive results hold despite the fact that DSP has the handicap that it may generate ungrammatical sequences relative to the underlying grammar, which do not lead to interpretable LFs. The LFP and CFP models, with higher performance than DSP, also may generate ungrammatical sequences. The best results overall are obtained by the DSP-C system, which does take into account the grammatical constraints. This model performs not only considerably better than its DSP baseline (72.7% over 64.7%), but also better than the models LFP and CFP. Somewhat contrary to our expectations, the DSP-CL model, which exploits constraints not only during decoding, but also during training, performs somewhat worse than the DSP-C, which only exploits them during decoding. We note that, for all the sequence based models, we strictly base our results on the performance of the first sequence predicted by the model. It would probably be possible to improve them further by reranking n-best sequence lists using a set of features similar to those used by SPO. 4.4 Analysis of results 4.4.1 Grammatical errors We just observed that CFP and LFP perform well on test data although the sequences generated are 9All the hyperparameters of rmsprop as well as options for initializing the neural network are left at their default values in Keras. Basketball Publication Housing LFP 6.6 3.7 1.6 CFP 1.8 1.9 2.2 DSP 9.5 11.8 5.8 DSP-C(L) 0.0 0.0 0.0 Table 3: Grammatical error rate of different systems on test. not guaranteed to be grammatical. We analysed the percentage of grammatical errors made by these models and also by DSP for three domains, which we report in Table 3.10 The table shows that LFP and especially CFP make few grammatical errors while DSP makes them more frequently. For DSP-C and DSP-CL, the error rate is always 0 since by construction, the derivations must be well-formed. 
Note that as DSP is not constrained by prior knowledge about the grammar, the grammatical error rate can be high – even higher than CFP or LFP because DSP typically has to choose among more symbols, see Table 1. 4.4.2 Difference between DSP-C and DSP-CL We observed that the DSP-CL model performs somewhat worse than DSP-C in our experiments. While we were a bit surprised by that behavior, given that the DSP-CL has strong theoretical motivations, let us note that the two models are quite different. To stress the difference, suppose that, for a certain prediction step, only two rules are considered as possible by the grammar, among the many rules of the grammar. Suppose that the LSTM gives probabilities 0.004 and 0.006 respectively to these two rules, the rest of the mass being on the ungrammatical rules. While the DSP-C model associates respective losses of −log 0.004, −log 0.006 with the two rules, the DSP-CL model normalizes the probabilites first, resulting in smaller losses −log 0.4, −log 0.6. As we choose the best complete sequence during decoding, it means that DSP-C will be more likely to prefer to follow a different path in such a case, in order not to incur a loss of at least −log 0.006. Intuitively, this means that DSPC will prefer paths where the LSTM on its own 10Our DCG permits to compute this error rate directly for canonical forms and derivation sequences. For logical forms, we made an estimation by executing them against the knowledge base and eliminating the cases where the errors are not due to the ungrammaticality of the logical form. 1347 Basketball Social Publication Blocks Calendar Housing Restaurants Avg SPO 46.3 48.2 59.0 41.9 74.4 54.0 75.9 57.1 LFP 73.1 70.2 72.0 55.4 71.4 61.9 76.5 68.6 CFP 80.3 79.5 70.2 54.1 73.2 63.5 71.1 70.3 DSP 71.6 67.5 64.0 53.9 64.3 55.0 76.8 64.7 DSP-C 80.5 80.0 75.8 55.6 75.0 61.9 80.1 72.7 DSP-CL 80.6 77.6 70.2 53.1 75.0 59.3 74.4 70.0 Table 2: Test results over different domains on SPO dataset. The numbers reported correspond to the proportion of cases in which the predicted LF is interpretable against the KB and returns the correct answer. LFP = Logical Form Prediction, CFP = Canonical Form Prediction, DSP = Derivation Sequence Prediction, DSP-C = Derivation Sequence constrained using grammatical knowledge, DSP-CL = Derivation Sequence using a loss function constrained by grammatical knowledge. gives small probability to ungrammatical choices, a property not shared by DSP-CL. However, a more complete understanding of the difference will need more investigation. 5 Related Work and Discussion In recent work on developing semantic parsers for open-domain and domain-specific question answering, various methods have been proposed to handle the mismatch between natural language questions and knowledge base representations including, graph matching, paraphrasing and embeddings techniques. Reddy et al. (2014) exploits a weak supervision signal to learn a mapping between the logical form associated by a CCG based semantic parser with the input question and the appropriate logical form in Freebase (Bollacker et al., 2008). Paraphrase-based approaches (Fader et al., 2013; Berant and Liang, 2014) generate variants of the input question using a simple hand-written grammar and then rank these using a paraphrase model. That is, in their setting, the logical form assigned to the input question is that of the generated sentence which is most similar to the input question. Finally, Bordes et al. 
(2014b; 2014a) learn a similarity function between a natural language question and the knowledge base formula encoding its answer. We depart from these approaches in that we learn a direct mapping between natural language questions and their corresponding logical form or equivalently, their corresponding derivation and canonical form. This simple, very direct approach to semantic parsing eschews the need for complex feature engineering and large external resources required by such paraphrase-based approaches as (Fader et al., 2013; Berant and Liang, 2014). It is conceptually simpler than the two steps, graph matching approach proposed by Reddy et al. (2014). And it can capture much more complex semantic representations than Bordes et al. (2014b; 2014a)’s embeddings based method.11 At a more abstract level, our approach differs from previous work in that it exploits the fact that logical forms are structured objects whose shape is determined by an underlying grammar. Using the power of RNN as sequence predictors, we learn to predict, from more or less explicit representations of this underlying grammar, equivalent but different representations of a sentence content namely, its canonical form, its logical form and its derivation sequence. We observe that the best results are obtained by using the derivation sequence, when also exploiting the underlying grammatical constraints. However the results obtained by predicting directly the linearization of the logical form or canonical form are not far behind; we show that often, the predicted linearizations actually satisfy the underlying grammar. This observation can be related to the results obtained by Vinyals et al. (2014), who use an RNN-based model to map a sentence to the linearization of its parse tree,12 and find that in most cases, the predicted sequence produces well-balanced parentheses. It would be interest11In (Bordes et al., 2014b; Bordes et al., 2014a), the logical forms denoting the question answers involve only few RDF triples consisting of a subject, a property and an object i.e., a binary relation and its arguments. 12Note a crucial difference with our approach. While in their case the underlying (“syntactic”) grammar is only partially and implicitly represented by a set of parse annotations, in our case the explicit (“semantic”) grammar is known a priori and can be exploited as such. 1348 ing to see if our observation would be maintained for more complex LFs than the ones we tested on, where it might be more difficult for the RNN to predict not only the parentheses, but also the dependencies between several lambda variables inside the overall structure of the LF. 6 Conclusion and Future Work We propose a sequence-based approach for the task of semantic parsing. We encode the target logical form, a structured object, through three types of sequences: direct linearization of the logical form, canonical form, derivation sequence in an underlying grammar. In all cases, we obtain competitive results with previously reported experiments. The most effective model is one using derivation sequences and taking into account the grammatical constraints. In order to encode the underlying derivation tree, we chose to use a leftmost derivation sequence. But there are other possible choices that might make the encoding even more easily learnable by the LSTM, and we would like to explore those in future work. 
In order to improve performance, other promising directions would involve adding re-reranking techniques and extending our neural networks with attention models in the spirit of (Bahdanau et al., 2015). Acknowledgements We would like to thank Guillaume Bouchard for his advice on a previous version of this work, as well as the anonymous reviewers for their constructive feedback. The authors gratefully acknowledge the support of the Association Nationale de la Recherche Technique (ANRT), Convention Industrielle de Formation par la Recherche (CIFRE) No. 2014/0476 . References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Annual Meeting for the Association for Computational Linguistics (ACL). J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), jun. Oral. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD Conference, pages 1247–1250. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. In Empirical Methods in Natural Language Processing (EMNLP), pages 615–620. Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery (ECML-PKDD). Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608–1618. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 758–764. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing, (EMNLP), pages 1545–1556. Razvan Pascanu, C¸ aglar G¨ulc¸ehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. CoRR, abs/1312.6026. 1349 Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL Volume 1: Long Papers. Fernando C.N. Pereira and David H.D. Warren. 1980. Definite clause grammars for language analysis a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13:231 – 278. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. 
Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377–392. Stuart J. Russell and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Pearson Education, 2 edition. Stuart M. Shieber. 2014. Bimorphisms and synchronous grammars. J. Language Modelling, 2(1):51–104. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104–3112. T. Tieleman and G. E. Hinton. 2012. Lecture 6.5rmsprop: Divide the gradient by a running average of its recent magnitude. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. Grammar as a foreign language. Neural Information Processing Systems (NIPS). Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL Volume 1: Long Papers, pages 1332–1342. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Empirical Methods in Natural Language Processing (EMNLP). 1350
2016
127
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1351–1360, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Word Meta-Embeddings Wenpeng Yin and Hinrich Sch¨utze Center for Information and Language Processing LMU Munich, Germany [email protected] Abstract Word embeddings – distributed representations of words – in deep learning are beneficial for many tasks in NLP. However, different embedding sets vary greatly in quality and characteristics of the captured information. Instead of relying on a more advanced algorithm for embedding learning, this paper proposes an ensemble approach of combining different public embedding sets with the aim of learning metaembeddings. Experiments on word similarity and analogy tasks and on part-of-speech tagging show better performance of metaembeddings compared to individual embedding sets. One advantage of metaembeddings is the increased vocabulary coverage. We release our metaembeddings publicly at http:// cistern.cis.lmu.de/meta-emb. 1 Introduction Recently, deep neural network (NN) models have achieved remarkable results in NLP (Collobert and Weston, 2008; Sutskever et al., 2014; Yin and Sch¨utze, 2015). One reason for these results are word embeddings, compact distributed word representations learned in an unsupervised manner from large corpora (Bengio et al., 2003; Mnih and Hinton, 2009; Mikolov et al., 2013a; Pennington et al., 2014). Some prior work has studied differences in performance of different embedding sets. For example, Chen et al. (2013) show that the embedding sets HLBL (Mnih and Hinton, 2009), SENNA (Collobert and Weston, 2008), Turian (Turian et al., 2010) and Huang (Huang et al., 2012) have great variance in quality and characteristics of the semantics captured. Hill et al. (2014; 2015a) show that embeddings learned by NN machine translation models can outperform three representative monolingual embedding sets: word2vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014) and CW (Collobert and Weston, 2008). Bansal et al. (2014) find that Brown clustering, SENNA, CW, Huang and word2vec yield significant gains for dependency parsing. Moreover, using these representations together achieved the best results, suggesting their complementarity. These prior studies motivate us to explore an ensemble approach. Each embedding set is trained by a different NN on a different corpus, hence can be treated as a distinct description of words. We want to leverage this diversity to learn better-performing word embeddings. Our expectation is that the ensemble contains more information than each component embedding set. The ensemble approach has two benefits. First, enhancement of the representations: metaembeddings perform better than the individual embedding sets. Second, coverage: metaembeddings cover more words than the individual embedding sets. The first three ensemble methods we introduce are CONC, SVD and 1TON and they directly only have the benefit of enhancement. They learn metaembeddings on the overlapping vocabulary of the embedding sets. CONC concatenates the vectors of a word from the different embedding sets. SVD performs dimension reduction on this concatenation. 
1TON assumes that a metaembedding for the word exists, e.g., it can be a randomly initialized vector in the beginning, and uses this metaembedding to predict representations of the word in the individual embedding sets by projections – the resulting fine-tuned metaembedding is expected to contain knowledge from all individual embedding sets. To also address the objective of increased coverage of the vocabulary, we introduce 1TON+, 1351 a modification of 1TON that learns metaembeddings for all words in the vocabulary union in one step. Let an out-of-vocabulary (OOV) word w of embedding set ES be a word that is not covered by ES (i.e., ES does not contain an embedding for w).1 1TON+ first randomly initializes the embeddings for OOVs and the metaembeddings, then uses a prediction setup similar to 1TON to update metaembeddings as well as OOV embeddings. Thus, 1TON+ simultaneously achieves two goals: learning metaembeddings and extending the vocabulary (for both metaembeddings and invidual embedding sets). An alternative method that increases coverage is MUTUALLEARNING. MUTUALLEARNING learns the embedding for a word that is an OOV in embedding set from its embeddings in other embedding sets. We will use MUTUALLEARNING to increase coverage for CONC, SVD and 1TON, so that these three methods (when used together with MUTUALLEARNING) have the advantages of both performance enhancement and increased coverage. In summary, metaembeddings have two benefits compared to individual embedding sets: enhancement of performance and improved coverage of the vocabulary. Below, we demonstrate this experimentally for three tasks: word similarity, word analogy and POS tagging. If we simply view metaembeddings as a way of coming up with better embeddings, then the alternative is to develop a single embedding learning algorithm that produces better embeddings. Some improvements proposed before have the disadvantage of increasing the training time of embedding learning substantially; e.g., the NNLM presented in (Bengio et al., 2003) is an order of magnitude less efficient than an algorithm like word2vec and, more generally, replacing a linear objective function with a nonlinear objective function increases training time. Similarly, fine-tuning the hyperparameters of the embedding learning algorithm is complex and time consuming. In terms of coverage, one might argue that we can retrain an existing algorithm like word2vec on a bigger corpus. However, that needs much longer training time than our simple ensemble approaches which achieve coverage as well as enhancement with less effort. In many cases, it is not possible to retrain 1We do not consider words in this paper that are not covered by any of the individual embedding sets. OOV refers to a word that is covered by a proper subset of ESs. using a different algorithm because the corpus is not publicly available. But even if these obstacles could be overcome, it is unlikely that there ever will be a single “best” embedding learning algorithm. So the current situation of multiple embedding sets with different properties being available is likely to persist for the forseeable future. Metaembedding learning is a simple and efficient way of taking advantage of this diversity. As we will show below they combine several complementary embedding sets and the resulting metaembeddings are stronger than each individual set. 2 Related Work Related work has focused on improving performance on specific tasks by using several embedding sets simultaneously. 
To our knowledge, there is no work that aims to learn generally useful metaembeddings from individual embedding sets. Tsuboi (2014) incorporates word2vec and GloVe embeddings into a POS tagging system and finds that using these two embedding sets together is better than using them individually. Similarly, Turian et al. (2010) find that using Brown clusters, CW embeddings and HLBL embeddings for Name Entity Recognition and chunking tasks together gives better performance than using these representations individually. Luo et al. (2014) adapt CBOW (Mikolov et al., 2013a) to train word embeddings on different datasets – a Wikipedia corpus, search clickthrough data and user query data – for web search ranking and for word similarity. They show that using these embeddings together gives stronger results than using them individually. Both (Yin and Sch¨utze, 2015) and (Zhang et al., 2016) try to incorporate multiple embedding sets into channels of convolutional neural network system for sentence classification tasks. The better performance also hints the complementarity of component embedding sets, however, such kind of incorporation brings large numbers of training parameters. In sum, these papers show that using multiple embedding sets is beneficial. However, they either use embedding sets trained on the same corpus (Turian et al., 2010) or enhance embedding sets by more training data, not by innovative learning algorithms (Luo et al., 2014), or make the system architectures more complicated (Yin and 1352 Vocab Size Dim Training Data HLBL (Mnih and Hinton, 2009) 246,122 100 Reuters English newswire August 1996-August 1997 Huang (Huang et al., 2012) 100,232 50 April 2010 snapshot of Wikipedia Glove (Pennington et al., 2014) 1,193,514 300 42 billion tokens of web data, from Common Crawl CW (Collobert and Weston, 2008) 268,810 200 Reuters English newswire August 1996-August 1997 word2vec (Mikolov et al., 2013b) 929,022 300 About 100 billion tokens from Google News Table 1: Embedding Sets (Dim: dimensionality of word embeddings). Sch¨utze, 2015; Zhang et al., 2016). In our work, we can leverage any publicly available embedding set learned by any learning algorithm. Our metaembeddings (i) do not require access to resources such as large computing infrastructures or proprietary corpora; (ii) are derived by fast and simple ensemble learning from existing embedding sets; and (iii) have much lower dimensionality than a simple concatentation, greatly reducing the number of parameters in any system that uses them. An alternative to learning metaembeddings from embeddings is the MVLSA method that learns powerful embeddings directly from multiple data sources (Rastogi et al., 2015). Rastogi et al. (2015) combine a large number of data sources and also run two experiments on the embedding sets Glove and word2vec. In contrast, our focus is on metaembeddings, i.e., embeddings that are exclusively based on embeddings. The advantages of metaembeddings are that they outperform individual embeddings in our experiments, that few computational resources are needed, that no access to the original data is required and that embeddings learned by new powerful (including nonlinear) embedding learning algorithms in the future can be immediately taken advantage of without any changes being necessary to our basic framework. In future work, we hope to compare MVLSA and metaembeddings in effectiveness (Is using the original corpus better than using embeddings in some cases?) 
and efficiency (Is using SGD or SVD more efficient and in what circumstances?). 3 Experimental Embedding Sets In this work, we use five released embedding sets. (i) HLBL. Hierarchical log-bilinear (Mnih and Hinton, 2009) embeddings released by Turian et al. (2010);2 246,122 word embeddings, 100 dimensions; training corpus: RCV1 corpus (Reuters English newswire, August 1996 – August 1997). 2metaoptimize.com/projects/wordreprs (ii) Huang.3 Huang et al. (2012) incorporate global context to deal with challenges raised by words with multiple meanings; 100,232 word embeddings, 50 dimensions; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe4 (Pennington et al., 2014). 1,193,514 word embeddings, 300 dimensions; training corpus: 42 billion tokens of web data, from Common Crawl. (iv) CW (Collobert and Weston, 2008). Released by Turian et al. (2010);5 268,810 word embeddings, 200 dimensions; training corpus: same as HLBL. (v) word2vec (Mikolov et al., 2013b) CBOW;6 929,022 word embeddings (we discard phrase embeddings), 300 dimensions; training corpus: Google News (about 100 billion words). Table 1 gives a summary of the five embedding sets. The intersection of the five vocabularies has size 35,965, the union has size 2,788,636. 4 Ensemble Methods This section introduces the four ensemble methods: CONC, SVD, 1TON and 1TON+. 4.1 CONC: Concatenation In CONC, the metaembedding of w is the concatenation of five embeddings, one each from the five embedding sets. For GloVe, we perform L2 normalization for each dimension across the vocabulary as recommended by the GloVe authors. Then each embedding of each embedding set is L2-normalized. This ensures that each embedding set contributes equally (a value between -1 and 1) when we compute similarity via dot product. We would like to make use of prior knowledge and give more weight to well performing embedding sets. In this work, we give GloVe and word2vec weight i > 1 and weight 1 to the other three embedding sets. We use MC30 (Miller and Charles, 1991) as dev set, since all embedding sets fully cover it. We set i = 8, the value in Figure 1 3ai.stanford.edu/˜ehhuang 4nlp.stanford.edu/projects/glove 5metaoptimize.com/projects/wordreprs 6code.google.com/p/Word2Vec 1353 where performance reaches a plateau. After L2 normalization, GloVe and word2vec embeddings are multiplied by i and remaining embedding sets are left unchanged. The dimensionality of CONC metaembeddings is k = 100+50+300+200+300 = 950. We also tried equal weighting, but the results were much worse, hence we skip reporting it. It nevertheless gives us insight that simple concatenation, without studying the difference among embedding sets, is unlikely to achieve enhancement. The main disadvantage of simple concatenation is that word embeddings are commonly used to initialize words in DNN systems; thus, the high-dimensionality of concatenated embeddings causes a great increase in training parameters. 4.2 SVD: Singular Value Decomposition We do SVD on above weighted concatenation vectors of dimension k = 950. Given a set of CONC representations for n words, each of dimensionality k, we compute an SVD decomposition C = USV T of the corresponding n×k matrix C. We then use Ud, the first d dimensions of U, as the SVD metaembeddings of the n words. We apply L2-normalization to embeddings; similarities of SVD vectors are computed as dot products. d denotes the dimensionality of metaembeddings in SVD, 1TON and 1TON+. We use d = 200 throughout and investigate the impact of d below. 
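Before moving to 1TON, here is a minimal numpy sketch of CONC and SVD as described above. It is illustrative only: the toy vocabulary and random matrices are placeholders, and GloVe's additional per-dimension normalization is folded into the generic row-wise L2 step.

```python
import numpy as np

def l2_normalize(E):
    """Row-wise L2 normalization of an (n_words x dim) embedding matrix."""
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    return E / np.maximum(norms, 1e-12)

def conc_meta(embedding_sets, weights):
    """Concatenate L2-normalized embedding sets, scaling each by its weight."""
    return np.hstack([w * l2_normalize(E) for E, w in zip(embedding_sets, weights)])

def svd_meta(conc, d=200):
    """Keep the first d left singular vectors of the concatenation as SVD meta-embeddings."""
    U, S, Vt = np.linalg.svd(conc, full_matrices=False)
    return l2_normalize(U[:, :d])

# Toy usage: five sets over the same 1000-word intersection vocabulary,
# with dimensions matching Table 1 and weight 8 for GloVe and word2vec.
sets = [np.random.randn(1000, dim) for dim in (100, 50, 300, 200, 300)]
meta_conc = conc_meta(sets, weights=[1, 1, 8, 1, 8])   # 950-dimensional CONC
meta_svd = svd_meta(meta_conc, d=200)                  # 200-dimensional SVD
```

Similarities over both kinds of meta-embeddings are then plain dot products, since the rows are L2-normalized.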
4.3 1TON Figure 2 depicts the simple neural network we employ to learn metaembeddings in 1TON. White i 2 4 6 8 10 12 14 16 18 20 performance 0.65 0.7 0.75 0.8 0.85 0.9 MC30 Figure 1: Performance vs. Weight scalar i rectangles denote known embeddings. The target to learn is the metaembedding (shown as shaded rectangle). Metaembeddings are initialized randomly. Figure 2: 1toN Let c be the number of embedding sets under consideration, V1, V2, . . . , Vi, . . . , Vc their vocabularies and V ∩= ∩c i=1Vi the intersection, used as training set. Let V∗denote the metaembedding space. We define a projection f∗i from space V∗to space Vi (i = 1, 2, . . . , c) as follows: ˆwi = M∗iw∗ (1) where M∗i ∈Rdi×d, w∗∈Rd is the metaembedding of word w in space V∗and ˆwi ∈Rdi is the projected (or learned) representation of word w in space Vi. The training objective is as follows: E = c X i=1 ki ·( X w∈V ∩ |ˆwi −wi|2 +l2 ·|M∗i|2) (2) In Equation 2, ki is the weight scalar of the ith embedding set, determined in Section 4.1, i.e, ki = 8 for GloVe and word2vec embedding sets, otherwise ki = 1; l2 is the weight of L2 normalization. The principle of 1TON is that we treat each individual embedding as a projection of the metaembedding, similar to principal component analysis. An embedding is a description of the word based on the corpus and the model that were used to create it. The metaembedding tries to recover a more comprehensive description of the word when it is trained to predict the individual descriptions. 1TON can also be understood as a sentence modeling process, similar to DBOW (Le and Mikolov, 2014). The embedding of each word in a sentence s is a partial description of s. DBOW combines all partial descriptions to form a comprehensive description of s. DBOW initializes the sentence representation randomly, then uses this representation to predict the representations of individual words. The sentence representation of s corresponds to the metaembedding in 1TON; and the representations of the words in s correspond to the five embeddings for a word in 1TON. 1354 4.4 1TON+ Recall that an OOV (with respect to embedding set ES) is defined as a word unknown in ES. 1TON+ is an extension of 1TON that learns embeddings for OOVs; thus, it does not have the limitation that it can only be run on overlapping vocabulary. Figure 3: 1toN+ Figure 3 depicts 1TON+. In contrast to Figure 2, we assume that the current word is an OOV in embedding sets 3 and 5. Hence, in the new learning task, embeddings 1, 2, 4 are known, and embeddings 3 and 5 and the metaembedding are targets to learn. We initialize all OOV representations and metaembeddings randomly and use the same mapping formula as for 1TON to connect a metaembedding with the individual embeddings. Both metaembedding and initialized OOV embeddings are updated during training. Each embedding set contains information about only a part of the overall vocabulary. However, it can predict what the remaining part should look like by comparing words it knows with the information other embedding sets provide about these words. Thus, 1TON+ learns a model of the dependencies between the individual embedding sets and can use these dependencies to infer what the embedding of an OOV should look like. CONC, SVD and 1TON compute metaembeddings only for the intersection vocabulary. 1TON+ computes metaembeddings for the union of all individual vocabularies, thus greatly increasing the coverage of individual embedding sets. 
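As a concrete illustration of the 1TON objective in Equation 2, the following is a hedged numpy sketch of a single-word gradient step. The paper trains with back-propagation over mini-batches (see Section 6), so this plain SGD update is illustrative rather than a faithful reimplementation; the learning rate and L2 weight only loosely echo Table 2.

```python
import numpy as np

def one_to_n_update(w_star, projections, targets, k, l2=5e-4, lr=0.005):
    """One gradient step of the 1TON loss for a single word.

    w_star: meta-embedding of the word, shape (d,)
    projections: list of matrices M_i, each of shape (d_i, d)
    targets: list of known embeddings w_i of the word, each of shape (d_i,)
    k: per-set weights k_i (e.g., 8 for GloVe/word2vec, 1 otherwise)
    """
    grad_w = np.zeros_like(w_star)
    for i, (M_i, w_i) in enumerate(zip(projections, targets)):
        diff = M_i @ w_star - w_i                       # \hat{w}_i - w_i
        grad_w += 2.0 * k[i] * (M_i.T @ diff)           # gradient w.r.t. the meta-embedding
        grad_M = 2.0 * k[i] * (np.outer(diff, w_star) + l2 * M_i)
        projections[i] = M_i - lr * grad_M              # gradient w.r.t. the projection
    return w_star - lr * grad_w, projections
```

For 1TON+, the same step applies, except that whenever w_i is an OOV embedding it is itself a parameter and receives the gradient -2 k_i diff instead of staying fixed.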
5 MUTUALLEARNING MUTUALLEARNING is a method that extends CONC, SVD and 1TON such that they have increased coverage of the vocabulary. With MUTUALLEARNING, all four ensemble methods – CONC, SVD, 1TON and 1TON+ – have the benefits of both performance enhancement and increased coverage and we can use criteria like performance, compactness and efficiency of training bs lr l2 1TON 200 0.005 5 × 10−4 MUTUALLEARNING (ml) 200 0.01 5 × 10−8 1TON+ 2000 0.005 5 × 10−4 Table 2: Hyperparameters. bs: batch size; lr: learning rate; l2: L2 weight. to select the best ensemble method for a particular application. MUTUALLEARNING is applied to learn OOV embeddings for all c embedding sets; however, for ease of exposition, let us assume we want to compute embeddings for OOVs for embedding set j only, based on known embeddings in the other c −1 embedding sets, with indexes i ∈{1 . . . j − 1, j + 1 . . . c}. We do this by learning c −1 mappings fij, each a projection from embedding set Ei to embedding set Ej. Similar to Section 4.3, we train mapping fij on the intersection Vi ∩Vj of the vocabularies covered by the two embedding sets. Formally, ˆwj = fij(wi) = Mijwi where Mij ∈Rdj×di, wi ∈Rdi denotes the representation of word w in space Vi and ˆwj is the projected metaembedding of word w in space Vj. Training loss has the same form as Equation 2 except that there is no “Pc i=1 ki” term. A total of c −1 projections fij are trained to learn OOV embeddings for embedding set j. Let w be a word unknown in the vocabulary Vj of embedding set j, but known in V1, V2, . . . , Vk. To compute an embedding for w in Vj, we first compute the k projections f1j(w1), f2j(w2), . . ., fkj(wk) from the source spaces V1, V2, . . . , Vk to the target space Vj. Then, the element-wise average of f1j(w1), f2j(w2), . . ., fkj(wk) is treated as the representation of w in Vj. Our motivation is that – assuming there is a true representation of w in Vj and assuming the projections were learned well – we would expect all the projected vectors to be close to the true representation. Also, each source space contributes potentially complementary information. Hence averaging them is a balance of knowledge from all source spaces. 6 Experiments We train NNs by back-propagation with AdaGrad (Duchi et al., 2011) and mini-batches. Table 2 gives hyperparameters. We report results on three tasks: word similarity, word analogy and POS tagging. 1355 Model SL999 WS353 MC30 MEN RW sem. syn. tot. 
ind-full 1 HLBL 22.1 (1) 35.7 (3) 41.5 (0) 30.7 (128) 19.1 (892) 27.1 (423) 22.8 (198) 24.7 2 Huang 9.7 (3) 61.7 (18) 65.9 (0) 30.1 (0) 6.4 (982) 8.4 (1016) 11.9 (326) 10.4 3 GloVe 45.3 (0) 75.4 (18) 83.6 (0) 81.6 (0) 48.7 (21) 81.4 (0) 70.1 (0) 75.2 4 CW 15.6 (1) 28.4 (3) 21.7 (0) 25.7 (129) 15.3 (896) 17.4 (423) 5.0 (198) 10.5 5 W2V 44.2 (0) 69.8 (0) 78.9 (0) 78.2 (54) 53.4 (209) 77.1 (0) 74.4 (0) 75.6 ind-overlap 6 HLBL 22.3 (3) 34.8 (21) 41.5 (0) 30.4 (188) 22.2 (1212) 13.8 (8486) 15.4 (1859) 15.4 7 Huang 9.7 (3) 62.0 (21) 65.9 (0) 30.7 (188) 3.9 (1212) 27.9 (8486) 9.9 (1859) 10.7 8 GloVe 45.0 (3) 75.5 (21) 83.6 (0) 81.4 (188) 59.1 (1212) 91.1 (8486) 68.2 (1859) 69.2 9 CW 16.0 (3) 30.8 (21) 21.7 (0) 24.7 (188) 17.4 (1212) 11.2 (8486) 2.3 (1859) 2.7 10 W2V 44.1 (3) 69.3 (21) 78.9 (0) 77.9 (188) 61.5 (1212) 89.3 (8486) 72.6 (1859) 73.3 discard 11 CONC (-HLBL) 46.0 (3) 76.5 (21) 86.3 (0) 82.2 (188) 63.0 (1211) 93.2 (8486) 74.0 (1859) 74.8 12 CONC (-Huang) 46.1 (3) 76.5 (21) 86.3 (0) 82.2 (188) 62.9 (1212) 93.2 (8486) 74.0 (1859) 74.8 13 CONC (-GloVe) 44.0 (3) 69.4 (21) 79.1 (0) 77.9 (188) 61.5 (1212) 89.3 (8486) 72.7 (1859) 73.4 14 CONC (-CW) 46.0 (3) 76.5 (21) 86.6 (0) 82.2 (188) 62.9 (1212) 93.2 (8486) 73.9 (1859) 74.7 15 CONC (-W2V) 45.0 (3) 75.5 (21) 83.6 (0) 81.6 (188) 59.1 (1212) 90.9 (8486) 68.3 (1859) 69.2 16 SVD (-HLBL) 48.5 (3) 76.1 (21) 85.6 (0) 82.5 (188) 61.5 (1211) 90.6 (8486) 69.5 (1859) 70.4 17 SVD (-Huang) 48.8 (3) 76.5 (21) 85.4 (0) 83.0 (188) 61.7 (1212) 91.4 (8486) 69.8 (1859) 70.7 18 SVD (-GloVe) 46.2 (3) 66.9 (21) 81.6 (0) 78.8 (188) 59.1 (1212) 88.8 (8486) 67.3 (1859) 68.2 19 SVD (-CW) 48.5 (3) 76.1 (21) 85.7 (0) 82.5 (188) 61.5 (1212) 90.6 (8486) 69.5 (1859) 70.4 20 SVD (-W2V) 49.4 (3) 79.0 (21) 87.3 (0) 83.1 (188) 59.1 (1212) 90.3 (8486) 66.0 (1859) 67.1 21 1TON (-HLBL) 46.3 (3) 75.8 (21) 83.0 (0) 82.1 (188) 60.5 (1211) 91.9 (8486) 75.9 (1859) 76.5 22 1TON (-Huang) 46.5 (3) 75.8 (21) 82.3 (0) 82.4 (188) 60.5 (1212) 93.5 (8486) 76.3 (1859) 77.0 23 1TON (-GloVe) 43.4 (3) 67.5 (21) 75.6 (0) 76.1 (188) 57.3 (1212) 89.0 (8486) 73.8 (1859) 74.5 24 1TON (-CW) 47.4 (3) 76.5 (21) 84.8 (0) 82.9 (188) 62.3 (1212) 91.4 (8486) 73.1 (1859) 73.8 25 1TON (-W2V) 46.3 (3) 76.2 (21) 80.0 (0) 81.5 (188) 56.8 (1212) 92.2 (8486) 72.2 (1859) 73.0 26 1TON+ (-HLBL) 46.1 (3) 75.8 (21) 85.5 (0) 82.1 (188) 62.3 (1211) 92.2 (8486) 76.2 (1859) 76.9 27 1TON+ (-Huang) 46.2 (3) 76.1 (21) 86.3 (0) 82.4 (188) 62.2 (1212) 93.8 (8486) 76.1 (1859) 76.8 28 1TON+ (-GloVe) 45.3 (3) 71.2 (21) 80.0 (0) 78.8 (188) 62.5 (1212) 90.0 (8486) 73.3 (1859) 74.0 29 1TON+ (-CW) 46.9 (3) 78.1 (21) 85.5 (0) 82.5 (188) 62.7 (1212) 91.8 (8486) 73.3 (1859) 74.1 30 1TON+ (-W2V) 45.8 (3) 76.2 (21) 84.4 (0) 81.3 (188) 60.9 (1212) 92.4 (8486) 72.4 (1859) 73.2 ensemble 31 CONC 46.0 (3) 76.5 (21) 86.3 (0) 82.2 (188) 62.9 (1212) 93.2 (8486) 74.0 (1859) 74.8 32 SVD 48.5 (3) 76.0 (21) 85.7 (0) 82.5 (188) 61.5 (1212) 90.6 (8486) 69.5 (1859) 70.4 33 1TON 46.4 (3) 74.5 (21) 80.7 (0) 81.6 (188) 60.1 (1212) 91.9 (8486) 76.1 (1859) 76.8 34 1TON+ 46.3 (3) 75.3 (21) 85.2 (0) 80.8 (188) 61.6 (1212) 92.5 (8486) 76.3 (1859) 77.0 35 state-of-the-art 68.5 81.0 – – – – – – Table 3: Results on five word similarity tasks (Spearman correlation metric) and analogical reasoning (accuracy). The number of OOVs is given in parentheses for each result. 
“ind-full/ind-overlap”: individual embedding sets with respective full/overlapping vocabulary; “ensemble”: ensemble results using all five embedding sets; “discard”: one of the five embedding sets is removed. If a result is better than all methods in “ind-overlap”, then it is bolded. Significant improvement over the best baseline in “ind-overlap” is underlined (online toolkit from http://vassarstats.net/index.html for Spearman correlation metric, test of equal proportions for accuracy, p < .05). RW(21) semantic syntactic total RND AVG ml 1TON+ RND AVG ml 1TON+ RND AVG ml 1TON+ RND AVG ml 1TON+ ind HLBL 7.4 6.9 17.3 17.5 26.3 26.4 26.3 26.4 22.4 22.4 22.7 22.9 24.1 24.2 24.4 24.5 Huang 4.4 4.3 6.4 6.4 1.2 2.7 21.8 22.0 7.7 4.1 10.9 11.4 4.8 3.3 15.8 16.2 CW 7.1 10.6 17.3 17.7 17.2 17.2 16.7 18.4 4.9 5.0 5.0 5.5 10.5 10.5 10.3 11.4 ensemble CONC 14.2 16.5 48.3 – 4.6 18.0 88.1 – 62.4 15.1 74.9 – 36.2 16.3 81.0 – SVD 12.4 15.7 47.9 – 4.1 17.5 87.3 – 54.3 13.6 70.1 – 31.5 15.4 77.9 – 1TON 16.7 11.7 48.5 – 4.2 17.6 88.2 – 60.0 15.0 76.8 – 34.7 16.1 82.0 – 1TON+ – – – 48.8 – – – 88.4 – – – 76.3 – – – 81.1 Table 4: Comparison of effectiveness of four methods for learning OOV embeddings. RND: random initialization. AVG: average of embeddings of known words. ml: MUTUALLEARNING. RW(21) means there are still 21 OOVs for the vocabulary union. 1356 6.1 Word Similarity and Analogy Tasks We evaluate on SimLex-999 (Hill et al., 2015b), WordSim353 (Finkelstein et al., 2001), MEN (Bruni et al., 2014) and RW (Luong et al., 2013). For completeness, we also show results for MC30, the validation set. The word analogy task proposed in (Mikolov et al., 2013b) consists of questions like, “a is to b as c is to ?”. The dataset contains 19,544 such questions, divided into a semantic subset of size 8869 and a syntactic subset of size 10,675. Accuracy is reported. We also collect the state-of-the-art report for each task. SimLex-999: (Wieting et al., 2015), WS353: (Halawi et al., 2012). Not all state-ofthe-art results are included in Table 3. One reason is that a fair comparison is only possible on the shared vocabulary, so methods without released embeddings cannot be included. In addition, some prior systems can possibly generate better performance, but those literature reported lower results than ours because different hyperparameter setup, such as smaller dimensionality of word embeddings or different evaluation metric. In any case, our main contribution is to present ensemble frameworks which show that a combination of complementary embedding sets produces betterperforming metaembeddings. Table 3 reports results on similarity and analogy. Numbers in parentheses are the sizes of words in the datasets that are uncovered by intersection vocabulary. We do not consider them for fair comparison. Block “ind-full” (1-5) lists the performance of individual embedding sets on the full vocabulary. Results on lines 6-34 are for the intersection vocabulary of the five embedding sets: “ind-overlap” contains the performance of individual embedding sets, “ensemble” the performance of our four ensemble methods and “discard” the performance when one component set is removed. The four ensemble approaches are very promising (31-34). For CONC, discarding HLBL, Huang or CW does not hurt performance: CONC (31), CONC(-HLBL) (11), CONC(-Huang) (12) and CONC(-CW) (14) beat each individual embedding set (6-10) in all tasks. GloVe contributes most in SimLex-999, WS353, MC30 and MEN; word2vec contributes most in RW and word analogy tasks. 
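As a side note on how the analogy accuracy reported above is computed: the standard vector-offset procedure answers "a is to b as c is to ?" by ranking all vocabulary words against b - a + c. A hedged numpy sketch, assuming row-normalized embeddings and a word-to-row index whose insertion order matches the rows:

```python
import numpy as np

def answer_analogy(E, index, a, b, c):
    """Return the word maximizing cosine similarity to b - a + c, excluding a, b, c.

    E: (n_words x d) matrix of L2-normalized embeddings; index: word -> row id.
    """
    query = E[index[b]] - E[index[a]] + E[index[c]]
    query /= np.linalg.norm(query) + 1e-12
    scores = E @ query                       # cosine similarity with every word
    for w in (a, b, c):                      # question words are never valid answers
        scores[index[w]] = -np.inf
    words = list(index)                      # assumes dict order matches row order
    return words[int(np.argmax(scores))]
```

Accuracy is then simply the fraction of questions for which the returned word matches the gold answer d.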
SVD (32) reduces the dimensionality of CONC from 950 to 200, but still gains performance in SimLex-999 and MEN. GloVe contributes most in SVD (larger losses on line 18 vs. lines 16-17, 1920). Other embeddings contribute inconsistently. 1TON performs well only on word analogy, but it gains great improvement when discarding CW embeddings (24). 1TON+ performs better than 1TON: it has stronger results when considering all embedding sets, and can still outperform individual embedding sets while discarding HLBL (26), Huang (27) or CW (29). These results demonstrate that ensemble methods using multiple embedding sets produce stronger embeddings. However, it does not mean the more embedding sets the better. Whether an embedding set helps, depends on the complementarity of the sets and on the task. CONC, the simplest ensemble, has robust performance. However, size-950 embeddings as input means a lot of parameters to tune for DNNs. The other three methods (SVD, 1TON, 1TON+) have the advantage of smaller dimensionality. SVD reduces CONC’s dimensionality dramatically and still is competitive, especially on word similarity. 1TON is competitive on analogy, but weak on word similarity. 1TON+ performs consistently strongly on word similarity and analogy. Table 3 uses the metaembeddings of intersection vocabulary, hence it shows directly the quality enhancement by our ensemble approaches; this enhancement is not due to bigger coverage. System comparison of learning OOV embeddings. In Table 4, we extend the vocabularies of each individual embedding set (“ind” block) and our ensemble approaches (“ensemble” block) to the vocabulary union, reporting results on RW and analogy – these tasks contain the most OOVs. As both word2vec and GloVe have full coverage on analogy, we do not rereport them in this table. This subtask is specific to “coverage” property. Apparently, our mutual learning and 1TON+ can cover the union vocabulary, which is bigger than each individual embedding sets. But the more important issue is that we should keep or even improve the embedding quality, compared with their original embeddings in certain component sets. For each embedding set, we can compute the representation of an OOV (i) as a randomly initialized vector (RND); (ii) as the average of embeddings of all known words (AVG); (iii) by MUTUALLEARNING (ml) and (iv) by 1TON+. 1TON+ learns OOV embeddings for individual embedding sets and metaembeddings simultaneously, and it 1357 50 100 150 200 250 300 350 400 450 500 55 60 65 70 75 80 85 90 Dimension of svd Performance (%) WC353 MC RG SCWS RW (a) Performance vs. d of SVD Dimension of O2M 50 100 150 200 250 300 350 400 450 500 Performance (%) 50 55 60 65 70 75 80 85 WC353 MC RG SCWS RW (b) Performance vs. d of 1TON Dimension of O2M+ 50 100 150 200 250 300 350 400 450 500 Performance (%) 50 55 60 65 70 75 80 85 90 WC353 MC RG SCWS RW (c) Performance vs. 
d of 1TON+ Figure 4: Influence of dimensionality newsgroups reviews weblogs answers emails wsj ALL OOV ALL OOV ALL OOV ALL OOV ALL OOV ALL OOV baselines TnT 88.66 54.73 90.40 56.75 93.33 74.17 88.55 48.32 88.14 58.09 95.76 88.30 Stanford 89.11 56.02 91.43 58.66 94.15 77.13 88.92 49.30 88.68 58.42 96.83 90.25 SVMTool 89.14 53.82 91.30 54.20 94.21 76.44 88.96 47.25 88.64 56.37 96.63 87.96 C&P 89.51 57.23 91.58 59.67 94.41 78.46 89.08 48.46 88.74 58.62 96.78 88.65 FLORS 90.86 66.42 92.95 75.29 94.71 83.64 90.30 62.15 89.44 62.61 96.59 90.37 +indiv FLORS+HLBL 90.01 62.64 92.54 74.19 94.19 79.55 90.25 62.06 89.33 62.32 96.53 91.03 FLORS+Huang 90.68 68.53 92.86 77.88 94.71 84.66 90.62 65.04 89.62 64.46 96.65 91.69 FLORS+GloVe 90.99 70.64 92.84 78.19 94.69 86.16 90.54 65.16 89.75 65.61 96.65 92.03 FLORS+CW 90.37 69.31 92.56 77.65 94.62 84.82 90.23 64.97 89.32 65.75 96.58 91.36 FLORS+W2V 90.72 72.74 92.50 77.65 94.75 86.69 90.26 64.91 89.19 63.75 96.40 91.03 +meta FLORS+CONC 91.87 72.64 92.92 78.34 95.37 86.69 90.69 65.77 89.94 66.90 97.31 92.69 FLORS+SVD 90.98 70.94 92.47 77.88 94.50 86.49 90.75 64.85 89.88 65.99 96.42 90.36 FLORS+1TON 91.53 72.84 93.58 78.19 95.65 87.62 91.36 65.36 90.31 66.48 97.66 92.86 FLORS+1TON+ 91.52 72.34 93.14 78.32 95.65 87.29 90.77 65.28 89.93 66.72 97.14 92.55 Table 5: POS tagging results on six target domains. “baselines” lists representative systems for this task, including FLORS. “+indiv / +meta”: FLORS with individual embedding set / metaembeddings. Bold means higher than “baselines” and “+indiv”. would not make sense to replace these OOV embeddings computed by 1TON+ with embeddings computed by “RND/AVG/ml”. Hence, we do not report “RND/AVG/ml” results for 1TON+. Table 4 shows four interesting aspects. (i) MUTUALLEARNING helps much if an embedding set has lots of OOVs in certain task; e.g., MUTUALLEARNING is much better than AVG and RND on RW, and outperforms RND considerably for CONC, SVD and 1TON on analogy. However, it cannot make big difference for HLBL/CW on analogy, probably because these two embedding sets have much fewer OOVs, in which case AVG and RND work well enough. (ii) AVG produces bad results for CONC, SVD and 1TON on analogy, especially in the syntactic subtask. We notice that those systems have large numbers of OOVs in word analogy task. If for analogy “a is to b as c is to d”, all four of a, b, c, d are OOVs, then they are represented with the same average vector. Hence, similarity between b −a + c and each OOV is 1.0. In this case, it is almost impossible to predict the correct answer d. Unfortunately, methods CONC, SVD and 1TON have many OOVs, resulting in the low numbers in Table 4. (iii) MUTUALLEARNING learns very effective embeddings for OOVs. CONC-ml, 1TON-ml and SVD-ml all get better results than word2vec and GloVe on analogy (e.g., for semantic analogy: 88.1, 87.3, 88.2 vs. 81.4 for GloVe). Considering further their bigger vocabulary, these ensemble methods are very strong representation learning algorithms. (iv) The performance of 1TON+ for learning embeddings for OOVs is competitive with MUTUALLEARNING. For HLBL/Huang/CW, 1TON+ performs slightly better than MUTUALLEARNING in all four met1358 rics. Comparing 1TON-ml with 1TON+, 1TON+ is better than “ml” on RW and semantic task, while performing worse on syntactic task. Figure 4 shows the influence of dimensionality d for SVD, 1TON and 1TON+. Peak performance for different data sets and methods is reached for d ∈[100, 500]. 
There are no big differences in the averages across data sets and methods for high enough d, roughly in the interval [150, 500]. In summary, as long as d is chosen to be large enough (e.g., ≥150), performance is robust. 6.2 Domain Adaptation for POS Tagging In this section, we test the quality of those individual embedding embedding sets and our metaembeddings in a Part-of-Speech (POS) tagging task. For POS tagging, we add word embeddings into FLORS7 (Schnabel and Sch¨utze, 2014) which is the state-of-the-art POS tagger for unsupervised domain adaptation. FLORS tagger. It treats POS tagging as a window-based (as opposed to sequence classification), multilabel classification problem using LIBLINEAR,8 a linear SVM. A word’s representation consists of four feature vectors: one each for its suffix, its shape and its left and right distributional neighbors. Suffix and shape features are standard features used in the literature; our use of them in FLORS is exactly as described in (Schnabel and Sch¨utze, 2014). Let f(w) be the concatenation of the two distributional and suffix and shape vectors of word w. Then FLORS represents token vi as follows: f(vi−2) ⊕f(vi−1) ⊕f(vi) ⊕f(vi+1) ⊕f(vi+2) where ⊕is vector concatenation. Thus, token vi is tagged based on a 5-word window. FLORS is trained on sections 2-21 of Wall Street Journal (WSJ) and evaluate on the development sets of six different target domains: five SANCL (Petrov and McDonald, 2012) domains – newsgroups, weblogs, reviews, answers, emails – and sections 22-23 of WSJ for in-domain testing. Original FLORS mainly depends on distributional features. We insert word’s embedding as the fifth feature vector. All embedding sets (except for 1TON+) are extended to the union vocabulary by MUTUALLEARNING. We test if this additional feature can help this task. Table 5 gives results for some representa7cistern.cis.lmu.de/flors (Yin et al., 2015) 8liblinear.bwaldvogel.de (Fan et al., 2008) tive systems (“baselines”), FLORS with individual embedding sets (“+indiv”) and FLORS with metaembeddings (“+meta”). Following conclusions can be drawn. (i) Not all individual embedding sets are beneficial in this task; e.g., HLBL embeddings make FLORS perform worse in 11 out of 12 cases. (ii) However, in most cases, embeddings improve system performance, which is consistent with prior work on using embeddings for this type of task (Xiao and Guo, 2013; Yang and Eisenstein, 2014; Tsuboi, 2014). (iii) Metaembeddings generally help more than the individual embedding sets, except for SVD (which only performs better in 3 out of 12 cases). 7 Conclusion This work presented four ensemble methods for learning metaembeddings from multiple embedding sets: CONC, SVD, 1TON and 1TON+. Experiments on word similarity and analogy and POS tagging show the high quality of the metaembeddings; e.g., they outperform GloVe and word2vec on analogy. The ensemble methods have the added advantage of increasing vocabulary coverage. We make our metaembeddings available at http://cistern.cis. lmu.de/meta-emb. Acknowledgments We gratefully acknowledge the support of Deutsche Forschungsgemeinschaft (DFG): grant SCHU 2246/8-2. References Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL, pages 809–815. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. JMLR, 3:1137–1155. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. 
Multimodal distributional semantics. JAIR, 49(1-47). Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2013. The expressive power of word embeddings. In ICML Workshop on Deep Learning for Audio, Speech, and Language Processing. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167. 1359 John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of WWW, pages 406–414. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of KDD, pages 1406–1414. Felix Hill, KyungHyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2014. Not all neural embeddings are born equal. In NIPS Workshop on Learning Semantics. Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2015a. Embedding word similarity with neural machine translation. In Proceedings of ICLR Workshop. Felix Hill, Roi Reichart, and Anna Korhonen. 2015b. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, pages 665–695. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL, pages 873–882. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML, pages 1188–1196. Yong Luo, Jian Tang, Jun Yan, Chao Xu, and Zheng Chen. 2014. Pre-trained multi-view word embedding using two-side neural network. In Proceedings of AAAI, pages 1982–1988. Minh-Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of CoNLL, volume 104, pages 104–113. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICLR Workshop. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Proceedings of NIPS, pages 1081–1088. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of EMNLP, 12:1532– 1543. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Proceedings of SANCL, volume 59. Pushpendre Rastogi, Benjamin Van Durme, and Raman Arora. 2015. Multiview LSA: Representation learning via generalized CCA. In Proceedings of NAACL, pages 556–566. Tobias Schnabel and Hinrich Sch¨utze. 2014. FLORS: Fast and simple domain adaptation for part-ofspeech tagging. TACL, 2:15–26. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. 
Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Yuta Tsuboi. 2014. Neural networks leverage corpuswide information for part-of-speech tagging. In Proceedings of EMNLP, pages 938–950. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345–358. Min Xiao and Yuhong Guo. 2013. Domain adaptation for sequence labeling tasks with a probabilistic language adaptation model. In Proceedings of ICML, pages 293–301. Yi Yang and Jacob Eisenstein. 2014. Unsupervised domain adaptation with feature embeddings. In Proceedings of ICLR Workshop. Wenpeng Yin and Hinrich Sch¨utze. 2015. Multichannel variable-size convolution for sentence classification. In Proceedings of CoNLL, pages 204–214. Wenpeng Yin, Tobias Schnabel, and Hinrich Sch¨utze. 2015. Online updating of word representations for part-of-speech taggging. In Proceedings of EMNLP, pages 1329–1334. Ye Zhang, Stephen Roller, and Byron Wallace. 2016. MGNC-CNN: A simple approach to exploiting multiple word embeddings for sentence classification. In Proceedings of NAACL-HLT. 1360
2016
128
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1361–1371, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Towards Constructing Sports News from Live Text Commentary Jianmin Zhang Jin-ge Yao Xiaojun Wan Institute of Computer Science and Technology, Peking University, Beijing 100871, China Key Laboratory of Computational Linguistic (Peking University), MOE, China {zhangjianmin2015, yaojinge, wanxiaojun}@pku.edu.cn Abstract In this paper, we investigate the possibility to automatically generate sports news from live text commentary scripts. As a preliminary study, we treat this task as a special kind of document summarization based on sentence extraction. We formulate the task in a supervised learning to rank framework, utilizing both traditional sentence features for generic document summarization and novelly designed task-specific features. To tackle the problem of local redundancy, we also propose a probabilistic sentence selection algorithm. Experiments on our collected data from football live commentary scripts and corresponding sports news demonstrate the feasibility of this task. Evaluation results show that our methods are indeed appropriate for this task, outperforming several baseline methods in different aspects. 1 Introduction There are a huge number of sports games played each day. It is demanding and challenging to write corresponding news reports instantly after various games. Meanwhile, live text commentary services are available on the web and becoming increasingly popular for sports fans who do not have access to live video streams due to copyright reasons. Some people may also prefer live texts on portable devices. The emergence of live texts has produced huge amount of text commentary data. To the best of our knowledge, there exists few studies about utilizing this rich data source. Manually written sports news for match report usually share the same information and vocabulary as live texts for the corresponding sports game. Sports news and commentary texts can be treated as two different sources of descriptions for the same sports events. It is tempting to investigate whether we can utilize the huge amount of live texts to automatically construct sports news, typically in a form of match report. Building such a system will largely relax the burden of sports news editors, making them free from repetitive tedious efforts for writing while producing sports news more efficiently. In this work, we study the possibility to construct sports news in the form of match reports from given live text commentary scripts. As a concrete example we collect live text data and corresponding news reports for football (called soccer more often in the United States) games and conduct our study thereby. However, our methods and discussions made in this paper can be trivially adapted to other types of sports games as well. As a preliminary study, we treat this task as a special kind of document summarization: extracting sentences from live texts to form a match report as generated news. However, generating sports news from live texts is still challenging due to some unique properties of live text commentary scripts. For almost every minute of the game there are usually several sentences describing various kinds of events. Texts are ordered and organized by the timeline, without apparent highlights for many important events 1. 
Descriptions are usually in short sentences, which is not helpful for sentence scoring and selection in general. The commentators may tend to use similar, repeated words describing the same type of key events, which may bring additional challenges to traditional summarization methods that are designed to avoid literal repetitions in nature. As a result, naively treating the task as an ordinary document summarization 1Some live texts services may use different textual format for scoring events, which is not enough for our more general purposes. 1361 problem can hardly lead to the construction of reasonable sports news reports. To overcome these difficulties, we explore some specific features of live text commentary scripts and formulate a system based on supervised learning to rank models for this task. In order to tackle the local redundancy issue, we also propose a probabilistic sentence selection strategy. We summarize our contributions as follows: • We originally study the task of sports news construction from live text commentary and we build datasets for supervised learning and evaluation for this task. • We formulate the task in a learning to rank framework, utilizing both traditional features for document summarization and novel taskspecific features during supervised learning. • We propose a probabilistic sentence selection algorithm to address the issue of local redundancy in description. • We conduct a series of experiments on a real dataset and the evaluation results verify the performance of our system. Results suggest that constructing sports news from live texts is feasible and our proposed methods can outperform a few strong baselines. 2 Problem Statement 2.1 Task Description In this work, we treat the task of constructing sports news from live text commentary as a special kind of document summarization: extracting sentences from live text scripts to form a match report. Formally, given a piece of live text commentary containing a collection of candidate sentences S = {s1, s2, . . . , sn} describing a particular sports game G, we need to extract sentences to form a summary of G which are suitable to be formed as sports news. The total length should not exceed a pre-specified length budget B. The overall framework of generic document summarization can still be retained for this preliminary study. We first rank all candidate sentences according to a sentence scoring scheme and then select a few sentences according to certain criteria to form the final generated news. 2.2 Data Collection To the best of our knowledge, there does not exist off-the-shelf datasets for evaluating sports news construction. Therefore we have to build a new dataset for this study. We will focus on live text scripts for football (soccer) games as a concrete instance, since football live texts are the easiest to collect. Note that the methods and discussions described in this paper can trivially generalize to other types of sports games. Meanwhile, live text commentary services are extremely popular in China, where sports fans in many cases do not have access to live video streams due to copyright reasons. The most influential football live services are Sina Sports Live 2 and 163 Football Live 3. For evaluation purposes we need to simultaneously collect both live texts and news texts describing the same sports games. Due to the convenience and availability of parallel data collection, we build our dataset from Chinese websites. 
For most football games, there exist both live text scripts recorded after the games and human-written news reports on both Sina Sports and 163 Football. We crawl live text commentary scripts for 150 football matches on Sina Sports Live. Figure 1 displays an example of the format of the live texts, containing the main commentary text along with information of the current timeline and scoreline. 莱万多夫斯基右路传球给到穆勒 (Lewandowski passes the ball to the right and finds Müller) 上半场 42' (first half 42') 2-0 穆勒停球后直接射门 (Müller stops the ball and gets a direct shot) 上半场 43' (first half 43') 2-0 切赫这边反应很快将球托出横梁 (Fast reaction from Cech to tip the ball over the bar) 上半场 43' (first half 43') 2-0 Text Commentary Scoreline Timeline Figure 1: Illustration of the live text format For every match, two different corresponding sports news reports are collected from Sina Sports Live and 163 Football Matches Live, respectively. These news reports are manually written by professional editors and therefore suitable to be treated as gold-standard news for our task. The average number of sentences in the live texts for one match is around 242, containing around 4,590 Chinese characters for that match. The gold-standard news reports contain 1,185 Chinese characters on average, forming around 32 sentences. For both the gold-standard news and live text commentary scripts, we split them into sentences 2http://match.sports.sina.com.cn/ 3http://goal.sports.163.com/ 1362 and then use a Chinese word segmentation tool 4 to segment the sentences into word sequences. For each sentence, we compute its TFIDF vector for calculating literal cosine similarity when used. 3 Constructing Sports News via Sentence Extraction We build a system to automatically construct match reports from live text commentary. Since we have described the new challenges for this task, we may design a number of relevant features to address them. In this work, we cast the problem into supervised sentence extraction. Supervised approaches, especially those based on learning to rank (LTR), can better utilize the power of various task-dependent features (Shen and Li, 2011; Wang et al., 2013). For a given specific sports game, we extract features from all candidate sentences in the corresponding live texts and score the sentences using a learning to rank (LTR) model learned from the training data (Section 3.1). Then we select a few of them according to the ranking scores to form the constructed news (Section 3.3). 3.1 Training Data Format Supervised sentence scoring models based on LTR require input training data in the format of (xi, yi) for each candidate sentence si, where xi is the feature vector and yi is the preference score. The feature vector x is described in Section 3.2. The score y will be defined to reflect the importance, or the tendency to be included in the final news report, of the candidate sentence. In this work we first calculate a group of ROUGE-2 F-scores (cf. Section 4.4.1) of the candidate sentence, treating each sentence in the gold-standard news as reference. The score y of the candidate sentence is then set to be the maximum among those ROUGE-2 Fscores. Later we will see that this scores can indeed serve as good learning targets. 3.2 Features In this work, we extract both common features which have been widely used for generic document summarization (Shen and Li, 2011; Wang et al., 2013) and novel task-specific features aiming at proper sports news generation from live broadcast script. The features are described as follows. 
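To make the learning setup concrete, the sketch below shows how each candidate sentence could be turned into a training pair (xi, yi) as defined in Section 3.1. This is an illustrative sketch, not the authors' implementation: the feature extractors are hypothetical placeholders for the features described next, and the simplified bigram F-score stands in for the ROUGE-1.5.5 toolkit actually used for evaluation (Section 4.4.1).

from collections import Counter

def rouge_2_f(candidate_tokens, reference_tokens):
    """Simplified bigram-overlap ROUGE-2 F-score between two token sequences."""
    def bigrams(tokens):
        return Counter(zip(tokens, tokens[1:]))
    cand, ref = bigrams(candidate_tokens), bigrams(reference_tokens)
    overlap = sum((cand & ref).values())
    if overlap == 0 or not cand or not ref:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def make_training_pair(sentence, gold_sentences, feature_fns):
    """Build one (x, y) pair as in Section 3.1.

    `sentence` is assumed to be a dict holding the tokenised commentary
    sentence plus its timeline/scoreline metadata; `gold_sentences` are the
    tokenised sentences of the gold-standard news; `feature_fns` are the
    per-sentence feature extractors of Section 3.2 (hypothetical names).
    """
    x = [fn(sentence) for fn in feature_fns]
    # Target y: maximum ROUGE-2 F against any single gold-standard sentence.
    y = max(rouge_2_f(sentence["tokens"], gold) for gold in gold_sentences)
    return x, y

The field name "tokens" and the shape of `sentence` are assumptions made only for this illustration; the actual system computes the targets with the official ROUGE toolkit.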
4We use the ICTCLAS toolkit for word segmentation in this work: http://ictclas.nlpir.org/ 3.2.1 Basic Features Position: The position of each candidate sentence. Suppose there are n sentences in a document. For the i-th sentence, its position feature is computed as 1 −i−1 n . Length: The number of words contained in the sentence after stopwords removal. Number of stopwords: The Number of stopwords contained in each sentence. Sentences with many stopwords should be treated as less important candidates. Sum of word weights: The sum of TF-IDF weights for each word in a sentence. Similarity to the Neighboring Sentences: We calculate the average cosine similarity of a candidate sentence to its previous N and the next N neighboring sentences. We set N as 1 and 2 here to get two different features. 3.2.2 Task-specific Features The task we study has some unique properties compared with generic document summarization. For instance, in live text commentary for sports games such as football matches, the scripts not only contain descriptive texts but also the scoreline and timeline information. Such information can be utilized to judge the quality of candidate sentences as well. We extract a rich set of new features, which can be grouped into four types: Explicit highlight markers: Explicit highlight marker words in a sentence are usually good indicators for its importance. Sentences with more marker words are more probable to be extracted and contained in news or reports for the games. For example, words such as “破门(scores)” and “红牌(red card)” in a sentence may indicate that the sentence is describing important events and will be more likely to be extracted. We collect a short list of 25 explicit highlight marker words 5. For each marker word we create a binary feature to denote the presence or absence of that markers in each candidate sentence. We also use the number of markers as one feature, with the intuition that containing more marker words typically suggests more important sentences. Scoreline features: An audience of sports games typically pays more attention on scoreline changes, especially those deadlock-breaking scores that break the game from ties. We use three 5We include the full list of marker words in the supplementary materials due to the space limit. 1363 binary features to describe the scoreline information of each candidate sentence: • An indicator feature on whether there was a change of scoreline when the narrator or commentator was producing that sentence. • An indicator feature on whether the distance between the candidate sentence and the previous closest sentence with a change of scoreline is less than or equal to 5. • An indicator feature showing whether the game was a draw or not at that time. To better describe these features we give an example in Figure 2, where S1-S3 corresponds to the above three binary features, respectively. Text Commentary Timeline Scoreline S1 S2 S3 Both sides take advantages of counter attacks. 32' 1-1 0 0 0 1-2!! 33' 1-2 1 1 1 Alexis!! 33' 1-2 0 1 1 Özil finds the teammate byline followed by a low cross to far post, Alexis sends the ball into the net! 34' 1-2 0 1 1 Leicester players are unhappy. 34' 1-2 0 1 1 Figure 2: An example of scoreline features Timeline features: The timestamp on each sentence can reflect the progress of a sports game. We divide a match into five different stages as “未赛(not started)”, “上半场(first half)”, “中场 休息(half-time)”, “下半场(second half)” and “完 赛(full-time)”. 
Then we use five binary features to represent whether the sentence was describing a specific stage. We also use the specific time-stamp (in integral minutes) of the candidate sentence in the match as an additional feature. Suppose there are n minutes in the match (typically 90 minutes for football); for sentences on the time-stamp of the i-th minute, this feature is computed as $i/n$.

Player popularity: Sports fans usually focus more on the performance of the star players or in-form players during the games. We design two features to utilize player information described in a candidate sentence: the number of players contained in the sentence and the sum of their popularity measurements. In this work the popularity of a player is measured using search engines for news: we use the name of a certain player as input query to Baidu News6, and use the number of recent news items retrieved to measure this player's popularity.

6 http://news.baidu.com/

3.3 Sentence Selection
Once we have the trained LTR model, we can immediately construct news reports by selecting sentences with the highest scores. Unfortunately, this simple strategy will suffer from redundancy in commentary, since the LTR scores are predicted independently for each sentence, and high scores may be assigned to repeated commentary texts describing the same key event. Therefore, special care is needed in sentence selection. In principle, any selection strategy that explicitly accounts for redundancy could be applied here. In this work we propose a probabilistic approach based on determinantal point processes (Kulesza and Taskar, 2012, DPPs). This approach can naturally integrate the predicted scores from the LTR model while trying to avoid certain redundancy by producing more diverse extractions7. We first review some background knowledge on the model. More details can be found in the comprehensive survey (Kulesza and Taskar, 2012) covering this topic.

7 Many other approaches can also be used to achieve a similar effect, such as submodular maximization (Lin and Bilmes, 2010). We leave the comparison with these alternatives for future work.

3.3.1 Determinantal Point Processes
Determinantal point processes (DPPs) are distributions over subsets that jointly prefer quality of each item and diversity of the whole subset. Formally, a DPP is a probability measure defined on all possible subsets of a ground set of items $\mathcal{Y} = \{1, 2, \ldots, N\}$. For every $Y \subseteq \mathcal{Y}$ we have:

$$P(Y) = \frac{\det(L_Y)}{\det(L + I)}$$

where $L$ is a positive semidefinite matrix typically called an L-ensemble. $L_Y \equiv [L_{ij}]_{i,j \in Y}$ denotes the restriction of $L$ to the entries indexed by elements of $Y$, and $\det(L_\emptyset) = 1$. The term $\det(L + I)$ is the normalization constant, which has a succinct closed form and is easy to compute. We can define the entries of $L$ as follows:

$$L_{ij} = q_i \phi_i^\top \phi_j q_j = q_i \cdot \mathrm{sim}(i, j) \cdot q_j \quad (1)$$

where we can think of $q_i \in \mathbb{R}^+$ as the quality of an item $i$, and $\phi_i \in \mathbb{R}^n$ with $\|\phi_i\|_2 = 1$ denotes a normalised feature vector such that $\mathrm{sim}(i, j) \in [-1, 1]$ measures similarity between item $i$ and item $j$. This simple definition gives rise to a distribution that places most of its mass on sets that are both high quality and diverse. This is intuitive in a geometric sense since determinants are closely related to volumes; in particular, $\det(L_Y)$ is proportional to the volume spanned by the vectors $q_i \phi_i$ for $i \in Y$. Thus, item sets with both high-quality and diverse items will have the highest probability (Figure 3).

Figure 3: (a) The DPP probability of a set $Y$ depends on the volume spanned by vectors $q_i \phi_i$ for $i \in Y$. (b) As length increases, so does volume.
(c) As similarity increases, volume decreases.

3.3.2 Sentence Selection
In this work we formulate the sentence selection problem as maximum a posteriori (MAP) inference for DPPs, i.e. finding $\arg\max_Y \log \det(L_Y)$. It is known that MAP inference for DPPs is NP-hard (Gillenwater et al., 2012). Therefore we adopt the greedy approximate inference procedure used by Kulesza and Taskar (2011), which is fast and performs reasonably well in practice. The remaining question is how to define the L-ensemble matrix $L$, or equivalently how to define itemwise quality $q_i$ and pairwise similarity $\mathrm{sim}(i, j)$, where each item corresponds to a candidate sentence. Since we have predicted scores for all candidates with the LTR model, we simply set $q_i$ to be the ranking score for sentence $i$. The definition of $\mathrm{sim}(i, j)$ is more subtle since it directly addresses specific types of redundancy. The most straightforward definition is to use literal cosine similarity. This is used for traditional summarization problems (Kulesza and Taskar, 2011). However, the problem of constructing sports news from live broadcast scripts is rather different. A live broadcast script may use literally similar sentences to describe similar types of events that happened at different time stamps. Simply removing sentences that are similar in content may become harmful to the preservation of important events8. One typical redundancy that we found in this study is local description redundancy. In live texts, an important event (such as a goal) may be stressed multiple times consecutively by the commentator. Therefore in this study we use local literal similarity as a first attempt. Formally, the pairwise similarity is defined as:

$$\mathrm{sim}(i, j) = \begin{cases} 0, & \text{if } \max\{|i_p - j_p|, |i_t - j_t|\} > 1, \\ \cos(i, j), & \text{otherwise,} \end{cases}$$

where the subscripts $i_p$ and $i_t$ denote the position and timestamp of sentence $i$, respectively. In other words, we treat sentences written consecutively within one minute as local descriptions and only calculate literal cosine similarity for them.

8 Using cosine similarity for all similarity-dependent methods performs poorly in our experiments. Therefore we will not discuss cosine similarity in more detail later.

4 Experimental Setup
4.1 Data Preparation
As described earlier in Section 2.2, we evaluate the performance of different systems on our collected dataset. To utilize the dataset more fully and draw more reliable conclusions, we perform cross-validation during evaluation. Specifically, we randomly divide the dataset into three parts of equal size, i.e. each has 50 pairs of live texts and gold-standard news. Each time we set one of them as the test set and use the remaining two parts for training and validation. We will mainly report the averaged results from all three folds. For unsupervised baselines the results are calculated similarly via averaging the performance on the test set.

4.2 Learning to Rank
For predicting ranking scores we use the Random Forest (RF) (Breiman, 2001) ensemble ranker of LambdaMART (Wu et al., 2010), implemented in RankLib9. We set the number of iterations to 300 and the sampling rate to 0.3. Using different values did not show real differences.

4.3 Compared Baseline Methods
Our system is compared with several baselines, typically traditional summarization approaches:
HeadTail: Using head and tail sentences only. Commentators usually describe some basic information of the two sides at the beginning and summarize the scoring events at the end of the commentary.
This baseline resembles the baseline of leading sentences for traditional summarization. 9http://sourceforge.net/p/lemur/wiki/RankLib/; In preliminary experiments, we contrasted RF with support vector regression predictor as well as other pairwise and listwise LTR models. We found that RF consistently outperformed others. 1365 Centroid: In centroid-based summarization (Radev et al., 2000), a pseudo-sentence of the document called centroid is calculated. The centroid consists of words with TFIDF scores above a predefined threshold. The score of each sentence is defined by summing the scores based on different features including cosine similarity of sentences with the centroid, position weight and cosine similarity with the first sentence. LexRank: LexRank (Erkan and Radev, 2004) computes sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. ILP: Integer linear programming (ILP) approaches (Gillick et al., 2008) cast document summarization as combinatorial optimization. An ILP model selects sentences by maximizing the sum of frequency-induced weights of bigram concepts 10 contained in the summary. Highlight: This method is designed to show the effect of using merely the explicit highlight markers described in Section 3.2.2. The importance of a sentence is represented by the number of highlight markers it includes. For fair comparisons the length of each constructed news report is limited to be no more than 1,000 Chinese characters, roughly the same with the average length of the gold-standard news. Note that we do not use the traditional MMR redundancy removal algorithm based on literal similarity (Carbonell and Goldstein, 1998) since we find only ignorable differences between using MMR or not for all systems. 4.4 Evaluation Methods and Metrics 4.4.1 Automatic Evaluation Similar to the evaluation for traditional summarization tasks, we use the ROUGE metrics (Lin and Hovy, 2003) to automatically evaluate the quality of produced summaries given the goldstandard reference news. The ROUGE metrics measure summary quality by counting the precision, recall and F-score of overlapping units, such as n-grams and skip grams, between a candidate summary and the reference summaries. We use the ROUGE-1.5.5 toolkit to perform the 10We also tried words rather than bigrams but found slightly worse performance. evaluation. In this paper we report the F-scores of the following metrics in the experimental results: ROUGE-1 (unigram-based), ROUGE-2 (bigrambased) and ROUGE-SU4 (based on skip bigrams with a maximum skip distance of 4). 4.4.2 Pyramid Evaluation We also conduct manual pyramid evaluation in this study. Specifically, we use the modified pyramid scores as described in (Passonneau et al., 2005) to manually evaluate the summaries generated by different methods. We randomly sample 20 games from the data set and manually annotate facts on the gold-standard news. The annotated facts are mostly describing specific events happened during the game, e.g. “伊万被黄牌警告” (Ivanovic is shown the yellow card) and “内马尔 开出角球” (Neymar takes the corner). Each fact is treated as a Summarization Content Unit, (SCU) (Nenkova and Passonneau, 2004). The number of occurrences for each SCU in the gold-standard news is regarded as the weight of this SCU. 
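Before turning to the results, the sentence selection step of Section 3.3.2 can be made concrete with a short sketch. This is not the authors' released code: it builds the L-ensemble from the LTR scores (assumed positive) and the local similarity defined in Section 3.3.2, and greedily grows the selected set under a character-length budget. The field names "pos", "minute", "tfidf", and "text" are hypothetical.

import numpy as np

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu > 0 and nv > 0 else 0.0

def local_similarity(si, sj):
    """cos(i, j) only for sentences adjacent in position and within one minute."""
    if max(abs(si["pos"] - sj["pos"]), abs(si["minute"] - sj["minute"])) > 1:
        return 0.0
    return cosine(si["tfidf"], sj["tfidf"])

def greedy_dpp_select(sentences, ltr_scores, budget):
    """Budgeted greedy approximation of argmax_Y log det(L_Y)."""
    n = len(sentences)
    q = np.asarray(ltr_scores, dtype=float)   # quality q_i = LTR ranking score
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = local_similarity(sentences[i], sentences[j])
    L = np.outer(q, q) * sim                  # L_ij = q_i * sim(i, j) * q_j
    selected, length = [], 0
    while True:
        best, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected or length + len(sentences[i]["text"]) > budget:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:                      # budget exhausted or no valid extension
            break
        selected.append(best)
        length += len(sentences[best]["text"])
    return [sentences[i] for i in sorted(selected)]   # restore document order

Note that the pure greedy MAP procedure of Kulesza and Taskar (2011) stops when adding an item no longer increases the determinant; the budgeted variant sketched here instead keeps filling the length budget, matching the fixed 1,000-character limit used in the experiments.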
5 Results and Analysis 5.1 Comparison with Baseline Methods The average performance on all three folds of different methods are displayed in Table 1. Method R-1 R-2 R-SU4 HeadTail 0.30147 0.07779 0.10336 Centroid 0.32508 0.08113 0.11245 LexRank 0.31284 0.06159 0.09376 ILP 0.32552 0.07285 0.10378 Highlight 0.34687 0.08748 0.11924 RF 0.38559 0.11887 0.14907 RF+DPP 0.39391 0.11986 0.15097 Table 1: Comparison results of different methods As we can see from the results, our learning to rank approach based on RF achieves significantly (< 0.01 significance level for pairwiset testing) better results compared with traditional unsupervised summarization approaches 11. The ILP model, which is believed to be suitable for multi-document summarization, did not perform well in our settings. Head and tail sentences are informative but merely using them lacks specific descriptions for procedural events, therefore not 11We also conducted experiments on using our proposed features to calculate LexRank, but did not observe real difference compared with normal LexRank. This suggest that the performance gain comes from supervised learning to rank approach, not merely from the features. 1366 providing competitive results either. The comparison between RF and RF+DPP shows the effectiveness of our sentence selection strategy. However, the increase is still limited 12. This may become reasonable later when we discuss more about the errors from our systems. Merely using highlight markers to construct news also provides competitive results, but inferior to supervised models. This suggests that the highlight marker features are relatively strong indicators for good sentences while merely using these features may not be sufficient. Table 2 shows the average pyramid scores for the systems in comparison. The “Gold-standard” row denotes manually written news report and is listed for reference. We can see our learning to rank systems based on RF constructs news with the highest pyramid scores. Method Pyramid scores HeadTail 0.13657 Centroid 0.30663 LexRank 0.28756 ILP 0.20867 Highlight 0.41121 RF 0.53766 RF+DPP 0.62500 Gold-standard 0.88329 Table 2: Average Pyramid scores Overall, the experimental results indicate that our system can generate much better news than the baselines in both automatic and manual evaluations. We include examples of our constructed news reports in the supplementary materials. 5.2 Feature Validation Different groups of features may play different roles in the LTR models. In order to validate the impact of both the traditional features and the novel task-specific features, we conduct experiments with different combinations by removing each group of features respectively. Table 3 shows the results, with “w/o” denotes experiments without the corresponding group of features. Method R-1 R-2 R-SU4 RF 0.38559 0.11887 0.14907 RF-w/o novel 0.37297 0.10964 0.14021 RF-w/o trad. 0.36314 0.09910 0.13102 Table 3: Results of feature validation 12Significance level < 0.05 for pairwise-t testing only for ROUGE-1. We can observe that both the traditional features and the novel features contribute useful information for learning to rank models. Due to the nature of the sentence extraction approach, features designed for traditional document summarization are still playing an indispensable role for our task, although they might be important in this work for different reasons. 
For example, position features are indicative for traditional summarization since sentences appearing in the very beginning or the end are more probable as summarizing sentences. For sports commentary, positions are closely related to timeline in a more coarse fashion. Certain types of key events, for example player substitutions and even scores, may tend to happen in certain period in a game rather than uniformly spread out in every minute. 5.3 Room for Improvements 5.3.1 Upper Bounds To get a rough estimate of what is actually achievable in terms of the final ROUGE scores, we looked at different “upper bounds” under various scenarios (Table 4). We first evaluate one reference news with the other reference news served as the gold-standard result. The results are given in the row labeled reference of Table 4. This provides a reasonable estimate of human performance. Second, in sentence extraction we restrict the constructed news to sentences from the original commentary texts themselves. We use the greedy algorithm to extract sentences that maximize ROUGE-2F scores. The resulting performance is given in the row extract of Table 4. We observe numerically superior scores compared with reference. This is not strange since we are intentionally optimizing ROUGE scores. And also this suggests that the sentence extraction approach for sports news construction is rather reasonable, in terms of information overlap. Method R-1 R-2 R-SU4 reference 0.44725 0.15265 0.18064 extract 0.43270 0.16872 0.18622 target 0.40987 0.15901 0.17941 target+DPP 0.41536 0.15994 0.18232 RF+DPP 0.39391 0.11986 0.15097 Table 4: Upper bounds on ROUGE scores Third, we use the partial ROUGE-2 values, i.e. the targets used to train LTR models (cf. Section 3.1) for greedy selection and DPP selection, with results listed in the row target and tar1367 Time Live Text Commentary Script 55 内马尔 为 巴 萨 制造 了 一个 位置 不错 的 定位球 Neymar wins a free kick in a good position. 56 内马尔 ~ ~ ~ Neymar!!!!!!! 56 内马尔 的 定位球 直接 打入 了 球门 左上 角 的 死角 ! ! ! 门将 无能为力 The free kick from Neymar goes directly into the top left corner! The keeper can do nothing. 56 球 在 飞 向 球门 的 过程 中 下坠 速度 非常 快 The ball drops quickly and flies to the goal. Figure 4: Case I: short and noisy sentences Time Live Text Commentary Script FT 切尔西中场快发任意球,科斯塔打进全场唯一入球!!! From a quick free kick from Chelsea, Costa scored the only goal of the game!!! FT 整场比赛切尔西占据了主动,可面对诺维奇的铁桶阵,办法不多 Chelsea dominated the game but found it difficult against Norwich’s defense. FT 下半场利用对方的一次疏忽,阿扎尔中场被放倒,威廉快发任意球,科斯塔完成致 命一击 From an error from the opponent, Hazard was fouled. Willian launches a quick free kick and assists Costa for the lethal strike. Figure 5: Case II: summarizing sentences get+DPP of Table 4. This validates that using partial ROUGE-2 as the training target for LTR models is somewhat reasonable for this study. 5.3.2 Error Analysis In this preliminary study, we use LTR models and probabilistic sentence selection procedure. While reasonable performance has been achieved, there exist certain types of errors as we found in the constructed news results. Error I: First, sentences in live commentary are mostly short, and sometimes noisy. Sometimes an important event has been described using a number of consecutive short sentences. Our LTR models failed to generate high scores for such sentences and therefore will cause some lack of information. Figure 4 illustrates an example of this type of error in the constructed news report. All the sentences are describing a key scoring event. 
However, none of them were selected to construct the news because our LTR model assigns low scores for these short sentences. Meanwhile the second sentence can be treated as noisy. Error II: Second, commentators are likely to summarize important events during the game, not at the point when the event happens. Our sentence selection algorithm can only address local redundancy, while this issue is more global. Figure 5 illustrates an example of this case in the constructed news report. The only goal of the match is described during full-time (FT). Our method redundantly included this in the final constructed news even it had already selected that event. These two issues are highly non-trivial and have not been well addressed in the method we explored in this paper. We leave them for further study in the future. 5.3.3 Readability Assessment In this work we only consider sentence extraction. Unlike traditional summarization tasks, sports commentary texts are describing a different specific action in almost every sentence. Descriptive coherence becomes a more difficult challenge in this scenario. We conduct manual evaluation on systems in comparison along with manually written news reports (gold-standard). Three volunteers who are fluent in Chinese were asked to perform manual ratings on three factors: coherence (Coh.), non-redundancy (NR) and overall readability (Read.). The ratings are in the format of 1-5 numerical scores (not necessarily integral), with higher scores denote better quality. The results are shown in Table 5. Method Coh. NR Read. HeadTail 3.47 3.07 3.56 Centroid 2.87 3.72 2.66 LexRank 2.90 3.23 2.43 ILP 2.87 3.23 2.50 Highlight 3.36 3.72 3.06 RF 3.23 3.64 3.13 RF+DPP 3.23 3.87 3.06 Gold-Standard 4.67 4.23 4.77 Table 5: Manual readability ratings The differences between systems in terms of readability factors are not as large as information coverage suggested by ROUGE metrics and pyramid scores. Meanwhile, while we can observe that our approaches outperforms the unsupervised extractive summarization approaches in coherence and readability for certain level, the results also clearly suggest that there still exists large room for improvements in terms of the readability factors. 6 Discussions The general challenges for the particular task of sports news generation are mostly addressed in those designed features in the learning to rank framework. We utilize the timeline and scoreline information, while also keep traditional features such as sentence length. Experimental results show that our framework indeed outperforms strong traditional summarization baselines, while still having much room for improvement. We might also notice that there may exist some issues if merely using automatic metrics to evaluate the overall quality of the generated news reports. The ROUGE metrics are mainly based on 1368 ngram overlaps. For sports texts most of the proportions are dominated by proper names, certain types of actions or key events, etc. Compared with traditional summarization tasks, it might be easier to achieve high ROUGE scores with an emphasize on selecting important entities. In our experiments, methods with higher ROUGE scores can indeed achieve better coverage of important units such as events, as shown in pyramid scores in Table 2. However, we can also observe from Table 5 that automatic metrics currently cannot reflect readability factors very well. 
Generally speaking, while big difference in ROUGE may suggest big difference in overall quality, smaller ROUGE differences may not be that indicative enough. Therefore, it is interesting to find alternative automatic metrics in order to better reflect the general quality for this task. 7 Related Work To the best of our knowledge, generation of sports news from live text commentary is not a wellstudied task in related fields. One related study focused on generating textual summaries for sports events from status updates in Twitter (Nichols et al., 2012). There also exists earlier work on generation of sports highlight frames from sports videos, focusing on a very different type of data (Tjondronegoro et al., 2004). Bouayad-Agha et al. (2011) and Bouayad-Agha et al. (2012) constructed an ontology-based knowledge base for the generation of football summaries, using predefined extraction templates. Our task is closely related to document summarization, which has been studied quite intensively. Various approaches exist to challenge the document summarization task, including centroidbased methods, link analysis and graph-based algorithms (Erkan and Radev, 2004; Wan et al., 2007), combinatorial optimization techniques such as integer linear programming (Gillick et al., 2008) and submodular optimization (Lin and Bilmes, 2010). Supervised models including learning to rank models (Metzler and Kanungo, 2008; Shen and Li, 2011; Wang et al., 2013) and regression (Ouyang et al., 2007; Galanis and Malakasiotis, 2008; Hong and Nenkova, 2014) have also been adapted in the scenario of document summarization. Since sports live texts contain timeline information, summarization paradigms that utilize timeline and temporal information (Yan et al., 2011; Ng et al., 2014; Li et al., 2015) are also conceptually related. Supervised approaches related to this work have also been applied for timeline summarization, including linear regression for important scores (Tran et al., 2013a) and learning to rank models (Tran et al., 2013b). In this preliminary work we only use the timestamps in the definition of similarity for sentence selection. More crafted usages will be explored in the future. 8 Conclusion and Future Work In this paper we study a challenging task to automatically construct sports news from live text commentary. Using football live texts as an instance, we collect training data jointly from live text commentary services and sports news portals. We develop a system based on learning to rank models, with several novel task-specific features. To generate the final news summary and tackle the local redundancy problem, we also propose a probabilistic sentence selection method. Experimental results demostrate that this task is feasible and our proposed methods are appropriate. As a preliminary work, we only perform sentence extraction in this work. Since sports news and live commentary are in different genres, some post-editing rewritings will make the system generating more natural descriptions for sports news. We would like to extend our system to produce sports news beyond pure sentence extraction. Another important direction is to focus on the construction of datasets in larger scale. One feasible approach is to use a speech recognition system on live videos or broadcasts of sports games to collect huge amount of transcripts as our raw data source. 
Although more data can be easily collected in this case, the noisiness of audio transcripts may bring some additional challenges, therefore worthwhile for further study. Acknowledgments This work was supported by National Natural Science Foundation of China (61331011), National Hi-Tech Research and Development Program (863 Program) of China (2015AA015403) and IBM Global Faculty Award Program. We thank the anonymous reviewers for helpful comments and Kui Xu from our group for his help in calculating player popularity features. Xiaojun Wan is the corresponding author of this paper. 1369 References Nadjet Bouayad-Agha, Gerard Casamayor, and Leo Wanner. 2011. Content selection from an ontologybased knowledge base for the generation of football summaries. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 72–81. Association for Computational Linguistics. Nadjet Bouayad-Agha, Gerard Casamayor, Simon Mille, and Leo Wanner. 2012. Perspective-oriented generation of football match summaries: Old tasks, new challenges. ACM Transactions on Speech and Language Processing (TSLP), 9(2):3. Leo Breiman. 2001. Random forests. Machine learning, 45(1):5–32. Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335–336. ACM. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, pages 457–479. Dimitrios Galanis and Prodromos Malakasiotis. 2008. Aueb at tac 2008. In Proceedings of the TAC 2008 Workshop. Jennifer Gillenwater, Alex Kulesza, and Ben Taskar. 2012. Near-optimal map inference for determinantal point processes. In Advances in Neural Information Processing Systems, pages 2735–2743. Dan Gillick, Benoit Favre, and Dilek Hakkani-Tur. 2008. The icsi summarization system at tac 2008. In Proceedings of the Text Understanding Conference. Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multidocument summarization. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 712– 721, Gothenburg, Sweden, April. Association for Computational Linguistics. Alex Kulesza and Ben Taskar. 2011. Learning determinantal point processes. In UAI. Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3). Chen Li, Yang Liu, and Lin Zhao. 2015. Improving update summarization via supervised ilp and sentence reranking. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1317–1322, Denver, Colorado, May–June. Association for Computational Linguistics. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 912–920. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 71–78. Association for Computational Linguistics. Donald Metzler and Tapas Kanungo. 2008. Machine learned sentence selection strategies for querybiased summarization. In SIGIR Learning to Rank Workshop, pages 40–47. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. Jun-Ping Ng, Yan Chen, Min-Yen Kan, and Zhoujun Li. 2014. Exploiting timelines to enhance multidocument summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 923–933, Baltimore, Maryland, June. Association for Computational Linguistics. Jeffrey Nichols, Jalal Mahmud, and Clemens Drews. 2012. Summarizing sporting events using twitter. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces, pages 189–198. ACM. You Ouyang, Sujian Li, and Wenjie Li. 2007. Developing learning strategies for topic-based summarization. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 79–86. ACM. Rebecca J Passonneau, Ani Nenkova, Kathleen McKeown, and Sergey Sigelman. 2005. Applying the pyramid method in duc 2005. In Proceedings of the Document Understanding Conference (DUC 05), Vancouver, BC, Canada. Dragomir R Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies. In Proceedings of the 2000 NAACL-ANLP Workshop on Automatic summarization, pages 21–30. Association for Computational Linguistics. Chao Shen and Tao Li. 2011. Learning to rank for query-focused multi-document summarization. In Data Mining (ICDM), 2011 IEEE 11th International Conference on, pages 626–634. IEEE. 1370 Dian Tjondronegoro, Yi-Ping Phoebe Chen, and Binh Pham. 2004. Integrating highlights for more complete sports video summarization. IEEE multimedia, 11(4):22–37. Giang Binh Tran, Mohammad Alrifai, and Dat Quoc Nguyen. 2013a. Predicting relevant news events for timeline summaries. In Proceedings of the 22nd international conference on World Wide Web Companion, pages 91–92. International World Wide Web Conferences Steering Committee. Giang Binh Tran, Tuan A Tran, Nam-Khanh Tran, Mohammad Alrifai, and Nattiya Kanhabua. 2013b. Leveraging learning to rank in an optimization framework for timeline summarization. In SIGIR 2013 Workshop on Time-aware Information Access (TAIA. Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Manifold-ranking based topic-focused multidocument summarization. In IJCAI, volume 7, pages 2903–2908. Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, and Claire Cardie. 2013. A sentence compression based framework to query-focused multidocument summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1384–1394, Sofia, Bulgaria, August. Association for Computational Linguistics. Qiang Wu, Christopher JC Burges, Krysta M Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Information Retrieval, 13(3):254–270. Rui Yan, Liang Kong, Congrui Huang, Xiaojun Wan, Xiaoming Li, and Yan Zhang. 2011. Timeline generation through evolutionary trans-temporal summarization. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 433–443, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. 1371
2016
129
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 130–139, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning Yulia Tsvetkov♠Manaal Faruqui♠Wang Ling♣Brian MacWhinney♠Chris Dyer♣♠ ♠Carnegie Mellon University ♣Google DeepMind {ytsvetko,mfaruqui,cdyer}@cs.cmu.edu, [email protected], [email protected] Abstract We use Bayesian optimization to learn curricula for word representation learning, optimizing performance on downstream tasks that depend on the learned representations as features. The curricula are modeled by a linear ranking function which is the scalar product of a learned weight vector and an engineered feature vector that characterizes the different aspects of the complexity of each instance in the training corpus. We show that learning the curriculum improves performance on a variety of downstream tasks over random orders and in comparison to the natural corpus order. 1 Introduction It is well established that in language acquisition, there are robust patterns in the order by which phenomena are acquired. For example, prototypical concepts are acquired earlier; concrete words tend to be learned before abstract ones (Rosch, 1978). The acquisition of lexical knowledge in artificial systems proceeds differently. In general, models will improve during the course of parameter learning, but the time course of acquisition is not generally studied beyond generalization error as a function of training time or data size. We revisit this issue of choosing the order of learning—curriculum learning—framing it as an optimization problem so that a rich array of factors—including nuanced measures of difficulty, as well as prototypicality and diversity—can be exploited. Prior research focusing on curriculum strategies in NLP is scarce, and has conventionally been following a paradigm of “starting small” (Elman, 1993), i.e., initializing the learner with “simple” examples first, and then gradually increasing data complexity (Bengio et al., 2009; Spitkovsky et al., 2010). In language modeling, this preference for increasing complexity has been realized by curricula that increase the entropy of training data by growing the size of the training vocabulary from frequent to less frequent words (Bengio et al., 2009). In unsupervised grammar induction, an effective curriculum comes from increasing length of training sentences as training progresses (Spitkovsky et al., 2010). These case studies have demonstrated that carefully designed curricula can lead to better results. However, they have relied on heuristics in selecting curricula or have followed the intuitions of human and animal learning (Kail, 1990; Skinner, 1938). Had different heuristics been chosen, the results would have been different. In this paper, we use curriculum learning to create improved word representations. However, rather than testing a small number of curricula, we search for an optimal curriculum using Bayesian optimization. A curriculum is defined to be the ordering of the training instances, in our case it is the ordering of paragraphs in which the representation learning model reads the corpus. We use a linear ranking function to conduct a systematic exploration of interacting factors that affect curricula of representation learning models. We then analyze our findings, and compare them to human intuitions and learning principles. 
We treat curriculum learning as an outer loop in the process of learning and evaluation of vectorspace representations of words; the iterative procedure is (1) predict a curriculum; (2) train word embeddings; (3) evaluate the embeddings on tasks that use word embeddings as the sole features. Through this model we analyze the impact of curriculum on word representation models and on extrinsic tasks. To quantify curriculum properties, we define three groups of features aimed at analyzing statistical and linguistic content and structure 130 of training data: (1) diversity, (2) simplicity, and (3) prototypicality. A function of these features is computed to score each paragraph in the training data, and the curriculum is determined by sorting corpus paragraphs by the paragraph scores. We detail the model in §2. Word vectors are learned from the sorted corpus, and then evaluated on partof-speech tagging, parsing, named entity recognition, and sentiment analysis (§3). Our experiments confirm that training data curriculum affects model performance, and that models with optimized curriculum consistently outperform baselines trained on shuffled corpora (§4). We analyze our findings in §5. The contributions of this work are twofold. First, this is the first framework that formulates curriculum learning as an optimization problem, rather then shuffling data or relying on human intuitions. We experiment with optimizing the curriculum of word embeddings, but in principle the curriculum of other models can be optimized in a similar way. Second, to the best of our knowledge, this study is the first to analyze the impact of distributional and linguistic properties of training texts on the quality of task-specific word embeddings. 2 Curriculum Learning Model We are considering the problem of maximizing a performance of an NLP task through sequentially optimizing the curriculum of training data of word vector representations that are used as features in the task. Let X = {x1, x2, . . . , xn} be the training corpus with n lines (sentences or paragraphs). The curriculum of word representations is quantified by scoring each of the paragraphs according to the linear function w⊺φ(X), where φ(X) ∈Rℓ×1 is a real-valued vector containing ℓlinguistic features extracted for each paragraph, and w ∈Rℓ×1 denote the weights learned for these features. The feature values φ(X) are z-normalized across all paragraphs. These scores are used to specify the order of the paragraphs in the corpus—the curriculum: we sort the paragraphs by their scores. After the paragraphs are curriculum-ordered, the reordered training corpus is used to generate word representations. These word representations are then used as features in a subsequent NLP task. We define the objective function eval : X →R, which is the quality estimation metric for this NLP task performed on a held-out dataset (e.g., correlation, accuracy, F1 score, BLEU). Our goal is to define the features φ(X) and to find the optimal weights w that maximize eval. We optimize the feature weights using Bayesian optimization; we detail the model in §2.1. Distributional and linguistic features inspired by prior research in language acquisition and second language learning are described in §2.2. Figure 1 shows the computation flow diagram. Bayesian optimization Feature extraction Paragraph scoring & sorting Extrinsic task training & eval Representation learning Figure 1: Curriculum optimization framework. 
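A minimal sketch of the scoring-and-sorting loop in Figure 1 is given below. The functions train_word_vectors and evaluate_downstream are placeholders for the embedding training and extrinsic evaluation described later, the weight range is arbitrary, and the use of the hyperopt library for the TPE surrogate with an EI-style acquisition is an assumption; the paper does not name an implementation.

import numpy as np
from hyperopt import Trials, fmin, hp, tpe   # assumed TPE/EI implementation

def order_corpus(paragraphs, features, w):
    """Score each paragraph by w . phi(X) (features z-normalized) and sort."""
    phi = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    scores = phi @ np.asarray(w)
    order = np.argsort(scores)        # the learned signs of w decide the direction
    return [paragraphs[i] for i in order]

def objective(w, paragraphs, features):
    curriculum = order_corpus(paragraphs, features, w)
    embeddings = train_word_vectors(curriculum)    # placeholder: e.g. one cbow pass
    return -evaluate_downstream(embeddings)        # placeholder dev metric; negated for fmin

# Outer loop mirroring Algorithm 1, with a fixed number of trials:
# space = [hp.uniform(f"w{k}", -1.0, 1.0) for k in range(n_features)]
# best_w = fmin(fn=lambda w: objective(w, paragraphs, features),
#               space=space, algo=tpe.suggest, max_evals=10, trials=Trials())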
2.1 Bayesian Optimization for Curriculum Learning As no assumptions are made regarding the form of eval(w), gradient-based methods cannot be applied, and performing a grid search over parameterizations of w would require a exponentially growing number of parameterizations to be traversed. Thus, we propose to use Bayesian Optimization (BayesOpt) as the means to maximize eval(w). BayesOpt is a methodology to globally optimize expensive, multimodal black-box functions (Shahriari et al., 2016; Bergstra et al., 2011; Snoek et al., 2012). It can be viewed as a sequential approach to performing a regression from high-level model parameters (e.g., learning rate, number of layers in a neural network, and in our model–curriculum weights w) to the loss function or the performance measure (eval). An arbitrary objective function, eval, is treated as a black-box, and BayesOpt uses Bayesian inference to characterize a posterior distribution over functions that approximate eval. This model of eval is called the surrogate model. Then, the BayesOpt exploits this model to make decisions about eval, e.g., where is the expected maximum of the function, and what is the expected improvement that can be obtained over the best iteration so far. The strategy function, estimating the next set 131 of parameters to explore given the current beliefs about eval is called the acquisition function. The surrogate model and the acquisition function are the two key components in the BayesOpt framework; their interaction is shown in Algorithm 1. The surrogate model allows us to cheaply approximate the quality of a set of parameters w without running eval(w), and the acquisition function uses this surrogate to choose a new value of w. However, a trade-off must be made: should the acquisition function move w into a region where the surrogate believes an optimal value will be found, or should it explore regions of the space that reveal more about how eval behaves, perhaps discovering even better values? That is, acquisition functions balance a tradeoff between exploration—by selecting w in the regions where the uncertainty of the surrogate model is high, and exploitation—by querying the regions where the model prediction is high. Popular choices for the surrogate model are Gaussian Processes (Rasmussen, 2006; Snoek et al., 2012, GP), providing convenient and powerful prior distribution on functions, and tree-structured Parzen estimators (Bergstra et al., 2011, TPE), tailored to handle conditional spaces. Choices of the acquisition functions include probability of improvement (Kushner, 1964), expected improvement (EI) (Moˇckus et al., 1978; Jones, 2001), GP upper confidence bound (Srinivas et al., 2010), Thompson sampling (Thompson, 1933), entropy search (Hennig and Schuler, 2012), and dynamic combinations of the above functions (Hoffman et al., 2011); see Shahriari et al. (2016) for an extensive comparison. Yogatama et al. (2015) found that the combination of EI as the acquisition function and TPE as the surrogate model performed favorably in Bayesian optimization of text representations; we follow this choice in our model. 2.2 Distributional and Linguistic Features To characterize and quantify a curriculum, we define three categories of features, focusing on various distributional, syntactic, and semantic aspects of training data. We now detail the feature categories along with motivations for feature selection. DIVERSITY. Diversity measures capture the distributions of types in data. 
Entropy is the best-known measure of diversity in statistical research, but there are many others (Tang et al., 2006; Gimpel et al., 2013). Common measures of diversity are used in many contrasting fields, from ecology and biology (Rosenzweig, 1995; Magurran, 2013), to economics and social studies (Stirling, 2007). Diversity has been shown effective in related research on curriculum learning in language modeling, vision, and multimedia analysis (Bengio et al., 2009; Jiang et al., 2014).

Algorithm 1 Bayesian optimization
1: H ← ∅ ▷ Initialize observation history
2: A ← EI ▷ Initialize acquisition function
3: S_0 ← TPE ▷ Initialize surrogate model
4: for t ← 1 to T do
5:   w_t ← argmax_w A(w; S_{t−1}, H) ▷ Predict w_t by optimizing acquisition function
6:   eval(w_t) ▷ Evaluate w_t on extrinsic task
7:   H ← H ∪ (w_t, eval(w_t)) ▷ Update observation history
8:   Estimate S_t given H
9: end for
10: return H

Let $p_i$ and $p_j$ correspond to empirical frequencies of word types $t_i$ and $t_j$ in the training data. Let $d_{ij}$ correspond to their semantic similarity, calculated as the cosine similarity between embeddings of $t_i$ and $t_j$ learned from the training data. We annotate each paragraph with the following diversity features:
• Number of word types: #types
• Type-token ratio: #types/#tokens
• Entropy: $-\sum_i p_i \ln(p_i)$
• Simpson's index (Simpson, 1949): $\sum_i p_i^2$
• Quadratic entropy (Rao, 1982)1: $\sum_{i,j} d_{ij} p_i p_j$

1 Intuitively, this feature promotes paragraphs that contain semantically similar high-probability words.

SIMPLICITY. Spitkovsky et al. (2010) have validated the utility of syntactic simplicity in curriculum learning for unsupervised grammar induction by showing that training on sentences in order of increasing lengths outperformed other orderings. We explore the simplicity hypothesis, albeit without prior assumptions on specific ordering of data, and extend it to additional simplicity/complexity measures of training data. Our features are inspired by prior research in second language acquisition, text simplification, and readability assessment (Schwarm and Ostendorf, 2005; Heilman et al., 2007; Pitler and Nenkova, 2008; Vajjala and Meurers, 2012). We use an off-the-shelf syntactic parser2 (Zhang and Clark, 2011) to parse our training corpus. Then, the following features are used to measure phonological, lexical, and syntactic complexity of training paragraphs:
• Language model score
• Character language model score
• Average sentence length
• Verb-token ratio
• Noun-token ratio
• Parse tree depth
• Number of noun phrases: #NPs
• Number of verb phrases: #VPs
• Number of prepositional phrases: #PPs

PROTOTYPICALITY. This is a group of semantic features that use insights from cognitive linguistics and child language acquisition. The goal is to characterize the curriculum of representation learning in terms of the curriculum of human language learning. We resort to the Prototype theory (Rosch, 1978), which posits that semantic categories include more central (or prototypical) as well as less prototypical words. For example, in the ANIMAL category, dog is more prototypical than sloth (because dog is more frequent); dog is more prototypical than canine (because dog is more concrete); and dog is more prototypical than bull terrier (because dog is less specific). According to the theory, more prototypical words are acquired earlier.
We use lexical semantic databases to operationalize insights from the prototype theory in the following semantic features; the features are computed on token level and averaged over paragraphs: • Age of acquisition (AoA) of words was extracted from the crowd-sourced database, containing over 50 thousand English words (Kuperman et al., 2012). For example, the AoA of run is 4.47 (years), of flee is 8.33, and of abscond is 13.36. If a word was not found in the database it was assigned the maximal age of 25. • Concreteness ratings on the scale of 1–5 (1 is most abstract) for 40 thousand English lemmas (Brysbaert et al., 2014). For example, cookie is rated as 5, and spirituality as 1.07. 2http://http://people.sutd.edu.sg/ ~yue_zhang/doc • Imageability ratings are taken from the MRC psycholinguistic database (Wilson, 1988). Following Tsvetkov et al. (2014), we used the MRC annotations as seed, and propagated the ratings to all vocabulary words using the word embeddings as features in an ℓ2-regularized logistic regression classifier. • Conventionalization features count the number of “conventional” words and phrases in a paragraph. Assuming that a Wikipedia title is a proxy to a conventionalized concept, we counted the number of existing titles (from a database of over 4.5 million titles) in the paragraph. • Number of syllables scores are also extracted from the AoA database; out-of-database words were annotated as 5-syllable words. • Relative frequency in a supersense was computed by marginalizing the word frequencies in the training corpus over coarse semantic categories defined in the WordNet (Fellbaum, 1998; Ciaramita and Altun, 2006). There are 41 supersense types: 26 for nouns and 15 for verbs, e.g., NOUN.ANIMAL and VERB.MOTION. For example, in NOUN.ANIMAL the relative frequency of human is 0.06, of dog is 0.01, of bird is 0.01, of cattle is 0.009, and of bumblebee is 0.0002. • Relative frequency in a synset was calculated similarly to the previous feature category, but word frequencies were marginalized over WordNet synsets (more fine-grained synonym sets). For example, in the synset {vet, warhorse, veteran, oldtimer, seasoned stager}, veteran is the most prototypical word, scoring 0.87. 3 Evaluation Benchmarks We evaluate the utility of the pretrained word embeddings as features in downstream NLP tasks. We choose the following off-the-shelf models that utilize pretrained word embeddings as features: Sentiment Analysis (Senti). Socher et al. (2013) created a treebank of sentences annotated with fine-grained sentiment labels on phrases and sentences from movie review excerpts. The coarse-grained treebank of positive and negative classes has been split into training, development, and test datasets containing 6,920, 872, and 1,821 sentences, respectively. We use the average of the word vectors of a given sentence as a feature vector for classification (Faruqui et al., 2015; Sedoc 133 et al., 2016). The ℓ2-regularized logistic regression classifier is tuned on the development set and accuracy is reported on the test set. Named Entity Recognition (NER). Named entity recognition is the task of identifying proper names in a sentence, such as names of persons, locations etc. We use the recently proposed LSTMCRF NER model (Lample et al., 2016) which trains a forward-backward LSTM on a given sequence of words (represented as word vectors), the hidden units of which are then used as (the only) features in a CRF model (Lafferty et al., 2001) to predict the output label sequence. 
We use the CoNLL 2003 English NER dataset (Tjong Kim Sang and De Meulder, 2003) to train our models and present results on the test set. Part of Speech Tagging (POS). For POS tagging, we again use the LSTM-CRF model (Lample et al., 2016), but instead of predicting the named entity tag for every word in a sentence, we train the tagger to predict the POS tag of the word. The tagger is trained and evaluated with the standard Penn TreeBank (PTB) (Marcus et al., 1993) training, development and test set splits as described in Collins (2002). Dependency Parsing (Parse). Dependency parsing is the task of identifying syntactic relations between the words of a sentence. For dependency parsing, we train the stack-LSTM parser of Dyer et al. (2015) for English on the universal dependencies v1.1 treebank (Agi´c et al., 2015) with the standard development and test splits, reporting unlabeled attachment scores (UAS) on the test data. We remove all part-of-speech and morphology features from the data, and prevent the model from optimizing the word embeddings used to represent each word in the corpus, thereby forcing the parser to rely completely on the pretrained embeddings. 4 Experiments Data. All models were trained on Wikipedia articles, split to paragraph-per-line. Texts were cleaned, tokenized, numbers were normalized by replacing each digit with “DG”, all types that occur less than 10 times were replaces by the “UNK” token, the data was not lowercased. We list data sizes in table 1. # paragraphs # tokens # types 2,532,361 100,872,713 156,663 Table 1: Training data sizes. Setup. 100-dimensional word embeddings were trained using the cbow model implemented in the word2vec toolkit (Mikolov et al., 2013).3 All training data was used, either shuffled or ordered by a curriculum. As described in §3, we modified the extrinsic tasks to learn solely from word embeddings, without additional features. All models were learned under same conditions, across curricula: in Parse, NER, and POS we limited the number of training iterations to 3, 3, and 1, respectively. This setup allowed us to evaluate the effect of curriculum without additional interacting factors. Experiments. In all the experiments we first train word embedding models, then the word embeddings are used as features in four extrinsic tasks (§3). We tune the tasks on development data, and report results on the test data. The only component that varies across the experiments is order of paragraphs in the training corpus—the curriculum. We compare the following experimental setups: • Shuffled baselines: the curriculum is defined by random shuffling the training data. We shuffled the data 10 times, and trained 10 word embeddings models, each model was then evaluated on downstream tasks. Following Bengio et al. (2009), we report test results for the system that is closest to the median in dev scores. To evaluate variability and a range of scores that can be obtained from shuffling the data, we also report test results for systems that obtained the highest dev scores. • Sorted baselines: the curriculum is defined by sorting the training data by sentence length in increasing/decreasing order, similarly to (Spitkovsky et al., 2010). • Coherent baselines: the curriculum is defined by just concatenating Wikipedia articles. The goal of this experiment is to evaluate the importance of semantic coherence in training data. 3To evaluate the impact of curriculum learning, we enforced sequential processing of data organized in a predefined order of training examples. 
To control for sequential processing, word embedding were learned by running the cbow using a single thread for one iteration. 134 Our intuition is that a coherent curriculum can improve models, since words with similar meanings and similar contexts are grouped when presented to the learner. • Optimized curriculum models: the curriculum is optimized using the BayesOpt. We evaluate and compare models optimized using features from one of the three feature groups (§2.2). As in the shuffled baselines, we fix the number of trials (here, BayesOpt iterations) to 10, and we report test results of systems that obtained best dev scores. Results. Experimental results are listed in table 2. Most systems trained with curriculum substantially outperform the strongest of all baselines. These results are encouraging, given that all word embedding models were trained on the same set of examples, only in different order, and display the indirect influence of the data curriculum on downstream tasks. These results support our assumption that curriculum matters. Albeit not as pronounced as with optimized curriculum, sorting paragraphs by length can also lead to substantial improvements over random baselines, but there is no clear recipe on whether the models prefer curricula sorted in an increasing or decreasing order. These results also support the advantage of a taskspecific optimization framework over a general, intuition-guided recipe. An interesting result, also, that shuffling is not essential: systems trained on coherent data are on par (or better) than the shuffled systems.4 In the next section, we analyze these results qualitatively. 5 Analysis What are task-specific curriculum preferences? We manually inspect learned features and curriculum-sorted corpora, and find that best systems are obtained when their embeddings are 4Note that in the shuffled NER baselines, best dev results yield lower performance on the test data. This implies that in the standard development/test splits the development and test sets are not fully compatible or not large enough. We also observe this problem in the curriculum-optimized Parse-prototypicality and Senti-diversity systems. The dev scores for the Parse systems are 76.99, 76.47, 76.47 for diversity, prototypicality, and simplicity, respectively, but the prototypicality-sorted parser performs poorly on test data. Similarly in the sentiment analysis task, the dev scores are 69.15, 69.04, 69.49 for diversity, prototypicality, and simplicity feature groups. Senti-diversity scores, however, are lower on the test data, although the dev results are better than in Senti-simplicity. This limitation of the standard dev/test splits is beyond the scope of this paper. learned from curricula appropriate to the downstream tasks. We discuss below several examples. POS and Parse systems converge to the same set of weights, when trained on features that provide various measures of syntactic simplicity. The features with highest coefficients (and thus the most important features in sorting) are #NPs, Parse tree depth, #V Ps, and #PPs (in this order). The sign in the #NPs feature weight, however, is the opposite from the other three feature weights (i.e., sorted in different order). #NPs is sorted in the increasing order of the number of noun phrases in a paragraph, and the other features are sorted in the decreasing order. Since Wikipedia corpus contains a lot of partial phrases (titles and headings), such curriculum promotes more complex, full sentences, and demotes partial sentences. 
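The curricula analyzed here are induced from the learned feature weights by scoring each paragraph and sorting, as the discussion of coefficient signs above suggests. A minimal sketch of that step (our own illustration, assuming a precomputed, per-feature standardized paragraph-feature matrix; names are hypothetical):

```python
import numpy as np

def induce_curriculum(feature_matrix, weights):
    """Order paragraphs by the weighted sum of their distributional features.

    feature_matrix: (num_paragraphs, num_features) array of the diversity,
        simplicity, or prototypicality features, standardized per feature.
    weights: feature weights proposed by the Bayesian optimizer; the sign of
        each weight determines whether that feature is effectively sorted in
        increasing or decreasing order.
    Returns paragraph indices in the order they are shown to the learner.
    """
    scores = feature_matrix @ weights
    return np.argsort(scores)      # ascending scores define the curriculum
```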
The best Senti system is sorted by prototypicality features. The most important features (with the highest coefficients) are Concreteness, Relative frequency in a supersense, and the Number of syllables. The first two are sorted in decreasing order (i.e., paragraphs are sorted from more to less concrete, and from more to less prototypical words), and the Number of syllables is sorted in increasing order (which also promotes simpler, shorter words that are more prototypical). We hypothesize that this sorting reflects the type of data the sentiment analysis task is trained on: movie reviews, which are usually written in simple, colloquial language. Unlike the POS, Parse, and Senti systems, all NER systems prefer curricula in which texts are sorted from short to long paragraphs. The most important features in the best (simplicity-sorted) system are #PPs and Verb-token ratio, both sorted from fewer to more occurrences of prepositional and verb phrases. Interestingly, most of the top lines in the NER system curricula contain named entities, although none of our features mark named entities explicitly. We show top lines of the simplicity-optimized system in figure 2. Finally, in all systems sorted by prototypicality, the last line is indeed a non-prototypical word: Donaudampfschiffahrtselektrizitätenhauptbetriebswerkbauunterbeamtengesellschaft, an actual word in German, frequently used as an example of compounding in synthetic languages, but rarely (if ever) used by German speakers.

                                        Senti   NER    POS    Parse
Shuffled               median          66.01   85.88  96.35  75.08
                       best            66.61   85.50  96.38  76.40
Sorted                 long→short      66.78   85.22  96.47  75.85
                       short→long      66.12   85.49  96.20  75.31
Coherent               original order  66.23   85.99  96.47  76.08
Optimized curriculum   diversity       66.06   86.09  96.59  76.63
                       prototypicality 67.44   85.96  96.53  75.81
                       simplicity      67.11   86.42  96.62  76.54

Table 2: Evaluation of the impact of the curriculum of word embeddings on the downstream tasks.

Trimingham “ Golf ” ball .
Adélie penguin “ Atriplex ” leaf
UNK UNK Hồng Lĩnh mountain
Anneli Jäätteenmäki UNK cabinet
Gävle goat
Early telescope observations .
Scioptric ball
Matryoshka doll
Luxembourgian passport
Australian Cashmere goat
Plumbeous water redstart
Dagebüll lighthouse
Vecom FollowUs . tv
Syracuse Junction railroad .
San Clemente Island goat
Tychonoff plank

Figure 2: Most of the top lines in the best-scoring NER system contain named entities, although our features do not annotate named entities explicitly.

Weighting examples according to curriculum. Another way to integrate curriculum in word embedding training is to weight training examples according to the curriculum during word representation training. We modify the cbow objective \sum_{t=1}^{T} \log p(w_t \mid w_{t-c}..w_{t+c}) as follows:5

\sum_{t=1}^{T} \Big( \frac{1}{1 + e^{-weight(w_t)}} + \lambda \Big) \log p(w_t \mid w_{t-c}..w_{t+c})

Here, weight(w_t) denotes the score attributed to the token w_t, which is the z-normalized score of its paragraph; λ=0.5 is determined empirically. \log p(w_t \mid w_{t-c}..w_{t+c}) computes the probability of predicting word w_t, using the context of c words to the left and right of w_t. Notice that this quantity is no longer a proper probability, as we do not normalize the weights weight(w_t) over all tokens. However, the optimization in word2vec is performed using stochastic gradient descent, optimizing for a single token at each iteration. This yields a normalizer of 1 for each iteration, yielding the same gradient as the original cbow model.

5 The modified word2vec tool is located at https://github.com/wlin12/wang2vec.
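As a concrete illustration of this weighting scheme (the actual change lives inside the modified word2vec tool linked above; the sketch below only reproduces the per-token factor, with names of our choosing):

```python
import numpy as np

def curriculum_weights(paragraph_scores, lam=0.5):
    """Per-token factor 1 / (1 + exp(-weight(w_t))) + lambda.

    Every token of a paragraph inherits the z-normalized curriculum score of
    that paragraph as weight(w_t); the returned factor scales the cbow
    log-probability term (equivalently, the SGD update) for those tokens.
    """
    scores = np.asarray(paragraph_scores, dtype=float)
    z = (scores - scores.mean()) / scores.std()   # z-normalized paragraph scores
    return 1.0 / (1.0 + np.exp(-z)) + lam         # lambda = 0.5, set empirically
```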
We retrain our best curriculum-sorted systems with the modified objective, also controlling for curriculum. The results are shown in table 3. We find that the benefit of integrating curriculum in training objective of word representations is not evident across tasks: Senti and NER systems trained on vectors with the modified objective substantially outperform best results in table 2; POS and Parse perform better than the baselines but worse than the systems with the original objective. Senti NER POS Parse curriculum 67.44 86.42 96.62 76.63 cbow+curric 68.26 86.49 96.48 76.54 Table 3: Evaluation of the impact of curriculum integrated in the cbow objective. Are we learning task-specific curricula? One way to assess whether we learn meaningful taskspecific curriculum preferences is to compare curricula learned by one downstream task across different feature groups. If learned curricula are similar in, say, NER system, despite being optimized once using diversity features and once using prototypicality features—two disjoint feature sets—we can infer that the NER task prefers word embeddings learned from examples presented in a certain order, regardless of specific optimization features. For each downstream task, we thus measure Spearman’s rank correlation between the curricula optimized using diversity (D), or prototypicality (P), or simplicity (S) feature sets. Prior to measuring correlations, we remove duplicate lines from 136 the training corpora. Correlation results across tasks and across feature sets are shown in table 4. The general pattern of results is that if two systems score higher than baselines, training sentences of their feature embeddings have similar curricula (i.e., the Spearman’s ρ is positive), and if two systems disagree (one is above and one is below the baseline), then their curricula also disagree (i.e., the Spearman’s ρ is negative or close to zero). NER systems all outperform the baselines and their curricula have high correlations. Moreover, NER sorted by diversity and simplicity have better scores than NER sorted by prototypicality, and in line with these results ρ(S,D)NER > ρ(P,S)NER and ρ(S,D)NER > ρ(D,P)NER. Similar pattern of results is in POS correlations. In Parse systems, also, diversity and simplicity features yielded best parsing results, and ρ(S,D)Parse has high positive correlation. The prototypicality-optimized parser performed poorly, and its correlations with better systems are negative. The best parser was trained using the diversity-optimized curriculum, and thus ρ(D,P)Parse is the lowest. Senti results follow similar pattern of curricula correlations. Senti NER POS Parse ρ(D, P) -0.68 0.76 0.66 -0.76 ρ(P, S) 0.33 0.75 0.75 -0.45 ρ(S, D) -0.16 0.81 0.51 0.67 Table 4: Curricula correlations across feature groups. Curriculum learning vs. data selection. We compare the task of curriculum learning to the task of data selection (reducing the set of training instances to more important or cleaner examples). We reduce the training data to the subset of 10% of tokens, and train downstream tasks on the reduced training sets. We compare system performance trained using the top 10% of tokens in the best curriculum-sorted systems (Senti-prototypicality, NER-implicity, POS-simplicity, Parse-diversity) to the systems trained using the top 10% of tokens in a corpus with randomly shuffled paragraphs.6 The results are listed in table 5. 
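A minimal sketch of this data-selection step (our own illustration, with hypothetical names): paragraphs are taken in corpus order, curriculum-sorted or shuffled, until the token budget is reached.

```python
def select_top_tokens(ordered_paragraphs, token_fraction=0.1):
    """Keep the prefix of the corpus covering `token_fraction` of all tokens.

    ordered_paragraphs: list of token lists, either in curriculum order or
    randomly shuffled (the baseline condition).
    """
    budget = token_fraction * sum(len(p) for p in ordered_paragraphs)
    selected, used = [], 0
    for paragraph in ordered_paragraphs:
        if used >= budget:
            break
        selected.append(paragraph)
        used += len(paragraph)
    return selected
```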
The curriculum-based systems are better in POS 6Top n% tokens are used rather than top n% paragraphs because in all tasks except NER curriculum-sorted corpora begin with longer paragraphs. Thus, with top n% paragraphs our systems would have an advantage over random systems due to larger vocabulary sizes and not necessarily due to a better subset of data. Senti NER POS Parse random 63.97 82.35 96.22 69.11 curriculum 64.47 76.96 96.55 72.93 Table 5: Data selection results. and in Parse systems, mainly because these tasks prefer vectors trained on curricula that promote well-formed sentences (as discussed above). Conversely, NER prefers vectors trained on corpora that begin with named entities, so most of the tokens in the reduced training data are constituents in short noun phrases. These results suggest that the tasks of data selection and curriculum learning are different. Curriculum is about strong initialization of the models and time-course learning, which is not necessarily sufficient for data reduction. 6 Related Work Two prior studies on curriculum learning in NLP are discussed in the paper (Bengio et al., 2009; Spitkovsky et al., 2010). Curriculum learning and related research on self-paced learning has been explored more deeply in computer vision (Bengio et al., 2009; Kumar et al., 2010; Lee and Grauman, 2011) and in multimedia analysis (Jiang et al., 2015). Bayesian optimization has also received little attention in NLP. GPs were used in the task of machine translation quality estimation (Cohn and Specia, 2013) and in temporal analysis of social media texts (Preotiuc-Pietro and Cohn, 2013); TPEs were used by Yogatama et al. (2015) for optimizing choices of feature representations—ngram size, regularization choice, etc.—in supervised classifiers. 7 Conclusion We used Bayesian optimization to optimize curricula for training dense distributed word representations, which, in turn, were used as the sole features in NLP tasks. Our experiments confirmed that better curricula yield stronger models. We also conducted an extensive analysis, which sheds better light on understanding of text properties that are beneficial for model initialization. The proposed novel technique for finding an optimal curriculum is general, and can be used with other datasets and models. 137 Acknowledgments This work was supported by the National Science Foundation through award IIS-1526745. We are grateful to Nathan Schneider, Guillaume Lample, Waleed Ammar, Austin Matthews, and the anonymous reviewers for their insightful comments. References Željko Agi´c, Maria Jesus Aranzabe, Aitziber Atutxa, Cristina Bosco, Jinho Choi, Marie-Catherine de Marneffe, Timothy Dozat, Richárd Farkas, Jennifer Foster, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Jan Hajiˇc, Anders Trærup Johannsen, Jenna Kanerva, Juha Kuokkala, Veronika Laippala, Alessandro Lenci, Krister Lindén, Nikola Ljubeši´c, Teresa Lynn, Christopher Manning, Héctor Alonso Martínez, Ryan McDonald, Anna Missilä, Simonetta Montemagni, Joakim Nivre, Hanna Nurmi, Petya Osenova, Slav Petrov, Jussi Piitulainen, Barbara Plank, Prokopis Prokopidis, Sampo Pyysalo, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Kiril Simov, Aaron Smith, Reut Tsarfaty, Veronika Vincze, and Daniel Zeman. 2015. Universal dependencies 1.1. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proc. 
ICML, pages 41–48. James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In Proc. NIPS, pages 2546–2554. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904–911. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proc. EMNLP, pages 594–602. Trevor Cohn and Lucia Specia. 2013. Modelling annotator bias with multi-task Gaussian processes: An application to machine translation quality estimation. In Proc. ACL, pages 32–42. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, pages 1–8. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proc. ACL. Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99. Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proc. NAACL. Christiane Fellbaum, editor. 1998. WordNet: an electronic lexical database. MIT Press. Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proc. EMNLP, pages 1100–1111. Michael J. Heilman, Kevyn Collins-Thompson, Jamie Callan, and Maxine Eskenazi. 2007. Combining lexical and grammatical features to improve readability measures for first and second language texts. In Proc. NAACL, pages 460–467. Philipp Hennig and Christian J Schuler. 2012. Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 13(1):1809–1837. Matthew D Hoffman, Eric Brochu, and Nando de Freitas. 2011. Portfolio allocation for Bayesian optimization. In Proc. UAI, pages 327–336. Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander Hauptmann. 2014. Self-paced learning with diversity. In Proc. NIPS, pages 2078–2086. Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. 2015. Self-paced curriculum learning. In Proc. AAAI, volume 2, page 6. Donald R Jones. 2001. A taxonomy of global optimization methods based on response surfaces. Journal of global optimization, 21(4):345–383. Robert Kail. 1990. The development of memory in children. W. H. Freeman and Company, 3rd edition. M. P. Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Proc. NIPS, pages 1189–1197. Victor Kuperman, Hans Stadthagen-Gonzalez, and Marc Brysbaert. 2012. Age-of-acquisition ratings for 30,000 english words. Behavior Research Methods, 44(4):978–990. Harold J Kushner. 1964. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering, 86(1):97–106. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. pages 282–289. 138 Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proc. NAACL. Yong Jae Lee and Kristen Grauman. 2011. 
Learning the easy things first: Self-paced visual category discovery. In Proc. CVPR, pages 1721–1728. Anne E Magurran. 2013. Measuring biological diversity. John Wiley & Sons. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proc. ICLR. Jonas Moˇckus, Vytautas Tiesis, and Antanas Žilinskas. 1978. On Bayesian methods for seeking the extremum. Towards global optimization, 2(117129):2. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proc. EMNLP, pages 186–195. Daniel Preotiuc-Pietro and Trevor Cohn. 2013. A temporal model of text periodicities using gaussian processes. In Proc. EMNLP, pages 977–988. C Radhakrishna Rao. 1982. Diversity and dissimilarity coefficients: a unified approach. Theoretical population biology, 21(1):24–43. Carl Edward Rasmussen. 2006. Gaussian Processes for machine learning. Eleanor Rosch. 1978. Principles of categorization. In Eleanor Rosch and Barbara B. Lloyd, editors, Cognition and categorization, pages 28–71. Michael L Rosenzweig. 1995. Species diversity in space and time. Cambridge University Press. Sarah E. Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proc. ACL, pages 523–530. João Sedoc, Jean Gallier, Lyle Ungar, and Dean Foster. 2016. Semantic word clusters using signed normalized graph cuts. arXiv preprint arXiv:1601.05403. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando de Freitas. 2016. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE, 104(1):148–175. Edward H Simpson. 1949. Measurement of diversity. Nature. Burrhus Frederic Skinner. 1938. The behavior of organisms: an experimental analysis. An Experimental Analysis. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In Proc. NIPS, pages 2951– 2959. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP. Valentin I Spitkovsky, Hiyan Alshawi, and Dan Jurafsky. 2010. From baby steps to leapfrog: How less is more in unsupervised dependency parsing. In Proc. NAACL, pages 751–759. Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. 2010. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proc. ICML, pages 1015–1022. Andy Stirling. 2007. A general framework for analysing diversity in science, technology and society. Journal of the Royal Society Interface, 4(15):707–719. E Ke Tang, Ponnuthurai N Suganthan, and Xin Yao. 2006. An analysis of diversity measures. Machine Learning, 65(1):247–271. William R Thompson. 1933. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proc. ACL. Sowmya Vajjala and Detmar Meurers. 
2012. On improving the accuracy of readability classification using insights from second language acquisition. In Proc. BEA, pages 163–173. Michael Wilson. 1988. MRC psycholinguistic database: Machine-usable dictionary, version 2.00. Behavior Research Methods, Instruments, & Computers, 20(1):6–10. Dani Yogatama, Lingpeng Kong, and Noah A Smith. 2015. Bayesian optimization of text representations. In Proc. EMNLP, pages 2100–2105. Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105–151. 139
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1372–1381, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Continuous Space Rule Selection Model for Syntax-based Statistical Machine Translation Jingyi Zhang1,2, Masao Utiyama1, Eiichro Sumita1 Graham Neubig2, Satoshi Nakamura2 1National Institute of Information and Communications Technology, 3-5Hikaridai, Keihanna Science City, Kyoto 619-0289, Japan 2Graduate School of Information Science, Nara Institute of Science and Technology, Takayama, Ikoma, Nara 630-0192, Japan jingyizhang/mutiyama/[email protected] neubig/[email protected] Abstract One of the major challenges for statistical machine translation (SMT) is to choose the appropriate translation rules based on the sentence context. This paper proposes a continuous space rule selection (CSRS) model for syntax-based SMT to perform this context-dependent rule selection. In contrast to existing maximum entropy based rule selection (MERS) models, which use discrete representations of words as features, the CSRS model is learned by a feed-forward neural network and uses real-valued vector representations of words, allowing for better generalization. In addition, we propose a method to train the rule selection models only on minimal rules, which are more frequent and have richer training data compared to non-minimal rules. We tested our model on different translation tasks and the CSRS model outperformed a baseline without rule selection and the previous MERS model by up to 2.2 and 1.1 points of BLEU score respectively. 1 Introduction In syntax-based statistical machine translation (SMT), especially tree-to-string (Liu et al., 2006; Graehl and Knight, 2004) and forest-to-string (Mi et al., 2008) SMT, a source tree or forest is used as input and translated by a series of tree-based translation rules into a target sentence. A tree-based translation rule can perform reordering and translation jointly by projecting a source subtree into a target string, which can contain both terminals and nonterminals. One of the difficulties in applying this model is the ambiguity existing in translation rules: a S NP a thief caught PRP VBD DT NN NP VP I 了一个 抓 我 贼 S NP a cold caught PRP VBD DT NN NP VP I 了感冒 得 我 S NP caught VBD NP VP x0 了 x1 抓 x0 x1 S NP caught VBD NP VP x0 了 x1 得 x0 x1 Rule Extraction Figure 1: An ambiguous source subtree with different translations (English-to-Chinese). source subtree can have different target translations extracted from the parallel corpus as shown in Figure 1. Selecting correct rules during decoding is a major challenge for SMT in general, and syntax-based models are no exception. There have been several methods proposed to resolve this ambiguity. The most simple method, used in the first models of tree-to-string translation (Liu et al., 2006), estimated the probability of a translation rule by relative frequencies. For example, in Figure 1, the rule that occurs more times in the training data will have a higher score. Later, Liu et al. (2008) proposed a maximum entropy based rule selection (MERS, Section 2) model for syntax-based SMT, which used contextual information for rule selection, such as words surrounding a rule and words covered by nonterminals in a rule. 
For example, to choose the correct rule from the two rules in Figure 1 for decoding a particular input sentence, if the source phrase covered by “x1” is “a thief” and this child phrase 1372 has been seen in the training data, then the MERS model can use this information to determine that the first rule should be applied. However, if the source phrase covered by “x1” is a slightly different phrase, such as “a gunman”, it will be hard for the MERS model to select the correct rule, because it treats “thief” and “gunman” as two different and unrelated words. In this paper, we propose a continuous space rule selection (CSRS, Section 3) model, which is learned by a feed-forward neural network and replaces the discrete representations of words used in the MERS model with real-valued vector representations of words for better generalization. For example, the CSRS model can use the similarity of word representations for “gunman” and “thief” to infer that “a gunman” is more similar with “a thief” than “a cold”. In addition, we propose a new method, applicable to both the MERS and CSRS models, to train rule selection models only on minimal rules. These minimal rules are more frequent and have richer training data compared to non-minimal rules, making it possible to further relieve the data sparsity problem. In experiments (Section 4), we validate the proposed CSRS model and the minimal rule training method on English-to-German, Englishto-French, English-to-Chinese and English-toJapanese translation tasks. 2 Tree-to-String SMT and MERS 2.1 Tree-to-String SMT In tree-to-string SMT (Liu et al., 2006), a parse tree for the source sentence F is transformed into a target sentence E using translation rules R. Each tree-based translation rule r ∈R translates a source subtree ˜t into a target string ˜e, which can contain both terminals and nonterminals. During decoding, the translation system examines different derivations for each source sentence and outputs the one with the highest probability, ˆE = arg max E,R Pr (E, R|F) . (1) For a translation E of a source sentence F with derivation R, the translation probability is calculated as follows, Pr (E, R|F) ≈ exp  K P k=1 λkhk (E, R, F)  P E′,R′ exp  K P k=1 λkhk (E′, R′, F) . (2) Here, hk are features used in the translation system and λk are feature weights. Features used in Liu et al. (2006)’s model contain a language model and simple features based on relative frequencies, which do not consider context information. One of the most important features used in this model is based on the log conditional probability of the target string given the input source subtree log Pr ˜e|˜t . This allows the model to determine which target strings are more likely to be used in translation. However, as the correct translation of the rules may depend on context that is not directly included in the rule, this simple contextindependent estimate is inherently inaccurate. 2.2 Maximum Entropy Based Rule Selection To perform context-dependent rule selection, Liu et al. (2008) proposed the MERS model for syntax-based SMT. They built a maximum entropy classifier for each ambiguous source subtree ˜t, which introduced contextual information C and estimated the conditional probability using a loglinear model as shown below, Pr ˜e|˜t, C = exp  K P k=1 λkhk (˜e, C)  P ˜e′ exp  K P k=1 λkhk (˜e′, C) . (3) The target strings ˜e are treated as different classes for the classifier. 
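Written out in code, Equation (3) is a per-subtree softmax over the target strings observed for ˜t. The sketch below is our own illustration and uses a hypothetical feature-extraction function in place of the binary features listed next.

```python
import numpy as np

def mers_probability(target_strings, context, feature_fn, weights):
    """Pr(e | t, C) for one ambiguous source subtree t (Equation 3).

    target_strings: candidate translations e of the subtree seen in training.
    feature_fn(e, context): feature vector h(e, C) for one candidate
        (hypothetical signature; the real model uses the features of Sec. 2.2).
    weights: the lambda parameters estimated by maximum entropy training.
    """
    scores = np.array([weights @ feature_fn(e, context) for e in target_strings])
    scores -= scores.max()                 # numerical stabilization only
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()   # normalize over all candidates e'
```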
Supposing that, • r covers source span [fϕ, fϑ] and target span [eγ, eσ], • ˜t contains K nonterminals {Xk|0 ≤k ≤K −1}, • Xk covers source span [fϕk, fϑk] and target span [eγk, eσk], the MERS model used 5 kinds of source-side features as follows, 1. Lexical features: words around a rule (e.g. fϕ−1) and words covered by nonterminals in a rule (e.g. fϕ0). 1373 2. Part-of-speech features: part-of-speech (POS) of context words that are used as lexical features. 3. Span features: span lengths of source phrases covered by nonterminals in r. 4. Parent features: the parent node of ˜t in the parse tree of the source sentence. 5. Sibling features: the siblings of the root of ˜t. Note that the MERS model does not use features of the source subtree ˜t, because the source subtree ˜t is fixed for each classifier. The MERS model was integrated into the translation system as two additional features in Equation 2. Supposing that the derivation R contains M rules r1, ..., rM with ambiguous source subtrees, then these two MERS features are as follows, h1 (E, R, F) = M P m=1 log Pr ˜em|˜tm, Cm  h2 (E, R, F) = M, (4) where ˜tm and ˜em are the source subtree and the target string contained in rm, and Cm is the context of rm. h1 is the MERS probability feature, and, h2 is a penalty feature counting the number of predictions made by the MERS model. 3 Our CSRS Approach 3.1 Modeling The proposed CSRS model differs from the MERS model in three ways. 1. Instead of learning a single classifier for each source subtree ˜t, it learns a single classifier for all rules. 2. Instead of hand-crafted features, it uses a feed-forward neural network to induce features from context words. 3. Instead of one-hot representations, it uses distributed representations to exploit similarities between words. First, with regard to training, our CSRS model follows Zhang et al. (2015) in approximating the posterior probability by a binary classifier as follows, Pr ˜e|˜t, C ≈Pr v = 1|˜e, ˜t, C , (5) where v ∈{0, 1} is an indicator of whether ˜t is translated into ˜e. This is in contrast to the MERS model, which treated the rule selection problem as a multi-class classification task. If instead we attempted to estimate output probabilities for all different ˜e, the cost of estimating the normalization coefficient would be prohibitive, as the number of unique output-side word strings ˜e is large. There are a number of remedies to this, including noise contrastive estimation (Vaswani et al., 2013), but the binary approximation method has been reported to have better performance (Zhang et al., 2015). To learn this model, we use a feed-forward neural network with structure similar to neural network language models (Vaswani et al., 2013). The input of the neural rule selection model is a vector representation for ˜t, another vector representation for ˜e, and a set of ξ vector representations for both source-side and target-side context words of r: C(r) = w1, ..., wξ (6) In our model, C (r) is calculated differently depending on the number of nonterminals included in the rule. Specifically, Equation 7 defines Cout (r, n) to be context words (n-grams) around r and Cin (r, n, Xk) to be boundary words (n-grams) covered by nonterminal Xk in r.1 Cout (r, n) = fϕ−1 ϕ−n, fθ+n θ+1 , eγ−1 γ−n, eσ+n σ+1 Cin (r, n, Xk) = fϕk+n−1 ϕk , fθk θk−(n−1), eγk+n−1 γk , eσk σk−(n−1) (7) The context words used for a translation rule r with K nonterminals are shown as below. 
K C (r) = 0 Cout (r, 6) = 1 Cout (r, 4) , Cin (r, 2, X0) > 1 Cout (r, 2) , Cin (r, 2, X0) , Cin (r, 2, X1) We can see that rules with different numbers of nonterminals K use different context words.2 1Note that when extracting Cout, we use “⟨s⟩” and “⟨/s⟩” for context words that exceed the length of the sentence; When extracting Cin, we use “⟨non⟩” for context words that exceed the length of the nonterminal. Words that occur less than twice in the training data are replaced by “⟨unk⟩”. 2In most cases, restrictions on extracted rules will ensure that rules will only contain two nonterminals. However, when using minimal rules as described in the next section, more than two nonterminals are possible, and in these cases, only contextual information covered by the first two nonterminals is used in the input. These cases are sufficiently rare, however, that we chose to consider only the first two. 1374 S The blue rug on the floor of your apartment is really cute DT JJ NN IN DT NN IN PRP NN VBZ JJ RB NP NP NP ADJP VP PP NP PP NP 你 公寓 地板 铺 在 蓝色 毛毯 很 上 的 可爱 PP IN NP on x0 在 x0 上 r: : area covered by r : area covered by x0 in r Figure 2: Context word examples. The red words are contained in Cout (r, 4) and the blue words are contained in Cin (r, 2, X0). For example, if r does not contain nonterminals, then Cin is not used. Besides, we use more context words surrounding the rule (Cout (r, 6)) for rules with K = 0 than rules that contain nonterminals (Cout (r, 4) for K = 1 and Cout (r, 2) for K > 1). This is based on the intuition that rules with K = 0 can only use the context words surrounding the rule as information for rule selection, hence this information is more important than for other rules. Figure 2 gives an example of context words when applying the rule r to the example sentence. Note that we use target-side context because source-side context is not enough for selecting correct rules. Since it is not uncommon for one source sentence to have different correct translations, a translation rule used in one correct derivation may be incorrect for other derivations. In these cases, target-side context is useful for selecting appropriate translation rules. 3 The vector representations for ˜t, ˜e and C are obtained by using a projection matrix to project each one-hot input into a real-valued embedding vector. This projection is another key advantage over the MERS model. Because the CSRS model learns one unified model for all rules and can share all training data to learn better vector representations of words and rules, and the similarities between vectors can be used to generalize in cases such as the “thief/gunman” example in the introduction. 3It is also possible to consider target-side context in a framework like the MERS model, but we show in experiments that a linear model using the same features as the CSRS model did not improve accuracy. After calculating the projections, two hidden layers are used to combine all inputs. Finally, the neural network has two outputs Pr v = 1|˜e, ˜t, C  and Pr v = 0|˜e, ˜t, C . To train the CSRS model, we need both positive and negative training examples. Positive examples, ˜e, ˜t, C, 1 , can be extracted directly from the parallel corpus. For each positive example, we generate one negative example, ˜e′, ˜t, C, 0 . Here, ˜e′ is randomly generated according to the translation distribution (Zhang et al., 2015), Pr ˜e|˜t = Count ˜e, ˜t P ˜e′ Count ˜e′, ˜t, (8) where, Count ˜e, ˜t  is how many times ˜t is translated into ˜e in the parallel corpus. 
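A minimal sketch of this negative-example generation (Equation 8); the names below are ours, and whether the observed translation itself is excluded from the draw is an implementation choice we make here for illustration.

```python
import random

def sample_negative(source_subtree, rule_counts, positive_target):
    """Draw a wrong target string e' for one positive example (e, t, C).

    rule_counts[t] maps each target string observed for subtree t to its
    corpus count; e' is drawn with probability proportional to Count(e', t).
    """
    candidates = rule_counts[source_subtree]
    targets = [e for e in candidates if e != positive_target]
    if not targets:                         # unambiguous subtree: no negative
        return None
    weights = [candidates[e] for e in targets]
    return random.choices(targets, weights=weights, k=1)[0]
```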
During translating, following the MERS model, the CSRS model only calculates probabilities for rules with ambiguous source subtrees. These predictions are converted into two CSRS features for the translation system similar to the two MERS features in Equation 4: one is the product of probabilities calculated by the CSRS model and the other one is a penalty feature that stands for how many rules with ambiguous source subtrees are contained in one translation. 3.2 Usage of Minimal Rules Despite the fact that the CSRS model can share information among instances using distributed word representations, it still poses an extremely sparse learning problem. Specifically, the numbers of unique subtrees ˜t and strings ˜e are extremely large, 1375 Source tree Target string PP IN DT NP on the NN table NN table 桌子 DT NP the 桌子 NN table DT NP the x0 NN x0 在桌子上 PP IN NP on x0 在 x0 上 PP IN DT NP on the NN x0 在 x0 上 Rule1 Rule2 Rule3 Rule4 Rule5 Rule6 PP IN DT NN NP on the table 在 桌子 上 extract Figure 3: Rules. and many may only appear a few times in the corpus. To reduce these problems of sparsity, we propose another improvement to the model, specifically through the use of minimal rules. Minimal rules (Galley et al., 2004) are translation rules that cannot be split into two smaller rules. For example, in Figure 3, Rule2 is not a minimal rule, since Rule2 can be split into Rule1 and Rule3. In the same way, Rule4 and Rule6 are not minimal while Rule1, Rule3 and Rule5 are minimal. Minimal rules are more frequent than nonminimal rules and have richer training data. Hence, we can expect that a rule selection model trained on minimal rules will suffer less from data sparsity problems. Besides, without non-minimal rules, the rule selection model will need less memory and can be trained faster. To take advantage of this fact, we train another version of the CSRS model (CSRS-MINI) over only minimal rules. The probability of a nonminimal rule is then calculated using the product of the probability of minimal rules contained therein. Note that for both the standard CSRS and CSRS-MINI models, we use the same baseline translation system which can use non-minimal translation rules. The CSRS-MINI model will break translation rules used in translations down into minimal rules and multiply all probabilities to calculate the necessary features. 4 Experiments 4.1 Setting We evaluated the proposed approach for Englishto-German (ED), English-to-French (EF), English-to-Chinese (EC) and English-to-Japanese (EJ) translation tasks. For the ED and EF tasks, the translation systems are trained on Europarl v7 parallel corpus and tested on the WMT 2015 translation task.4 The test sets for the WMT 2014 translation task were used as development sets in our experiments. For the EC and EJ tasks, we used datasets provided for the patent machine translation task at NTCIR-9 (Goto et al., 2011).5 The detailed statistics for training, development and test sets are given in Table 1. The word segmentation was done by BaseSeg (Zhao et al., 2006) for Chinese and Mecab6 for Japanese. For each translation task, we used Travatar (Neubig, 2013) to train a forest-to-string translation system. GIZA++ (Och and Ney, 2003) was used for word alignment. A 5-gram language model was trained on the target side of the training corpus using the IRST-LM Toolkit7 with modified Kneser-Ney smoothing. Rule extraction was 4The WMT tasks provided other training corpora. 
We used only the Europarl corpus, because training a large-scale system on the whole data set requires large amounts of time and computational resources. 5Note that NTCIR-9 only contained a Chinese-to-English translation task. Because we want to test the proposed approach with a similarly accurate parsing model across our tasks, we used English as the source language in our experiments. In NTCIR-9, the development and test sets were both provided for the CE task while only the test set was provided for the EJ task. Therefore, we used the sentences from the NTCIR-8 EJ and JE test sets as the development set in our experiments. 6http://sourceforge.net/projects/mecab/files/ 7http://hlt.fbk.eu/en/irstlm 1376 SOURCE TARGET ED TRAIN #Sents 1.90M #Words 52.2M 49.7M #Vocab 113K 376K DEV #Sents 3,003 #Words 67.6K 63.0K TEST #Sents 2,169 #Words 46.8K 44.0K EF TRAIN #Sents 1.99M #Words 54.4M 60.4M #Vocab 114K 137K DEV #Sents 3,003 #Words 71.1K 81.1K TEST #Sents 1.5K #Words 27.1K 29.8K EC TRAIN #Sents 954K #Words 40.4M 37.2M #Vocab 504K 288K DEV #Sents 2K #Words 77.5K 75.4K TEST #Sents 2K #Words 58.1K 55.5K EJ TRAIN #Sents 3.14M #Words 104M 118M #Vocab 273K 150K DEV #Sents 2K #Words 66.5K 74.6K TEST #Sents 2K #Words 70.6K 78.5K Table 1: Data sets. performed using the GHKM algorithm (Galley et al., 2006) and the maximum numbers of nonterminals and terminals contained in one rule were set to 2 and 10 respectively. Note that when extracting minimal rules, we release this limit. The decoding algorithm is the bottom-up forest-to-string decoding algorithm of Mi et al. (2008). For English parsing, we used Egret8, which is able to output packed forests for decoding. We trained the CSRS models (CSRS and CSRSMINI) on translation rules extracted from the training set. Translation rules extracted from the development set were used as validation data for model training to avoid over-fitting. For different training epochs, we resample negative examples for each positive example to make use of different negative examples. The embedding dimension was set to be 50 and the number of hidden nodes was 100. The initial learning rate was set to be 0.1. The learning rate was halved each time the validation likelihood decreased. The number of epoches was set to be 20. A model was saved after each epoch and the model with highest validation likelihood was used in the translation system. We implemented Liu et al. (2008)’s MERS model to compare with our approach. The train8https://code.google.com/archive/p/egret-parser ED EF EC EJ Base 15.00 26.76 29.42 37.10 MERS 15.62 27.33 29.75 37.76 CSRS 16.15 28.05 30.12 37.83 MERS-MINI 15.77 28.13 30.53 38.14 CSRS-MINI 16.49 28.30 31.63 38.32 Table 2: Translation results. The bold numbers stand for the best systems. ED EF EC EJ CSRS vs. MERS >> >> > − CSRS-MINI vs. MERS-MINI >> − >> − MERS-MINI vs. MERS − >> >> >> CSRS-MINI vs. CSRS > − >> >> Table 3: Significance test results. The symbol >> (>) represents a significant difference at the p < 0.01 (p < 0.05) level and the symbol - represents no significant difference at the p < 0.05 level. ing instances for their model were extracted from the training set. Following their work, the iteration number was set to be 100 and the Gaussian prior was set to be 1. We also compared the original MERS model and the MERS model trained only on minimal rules (MERS-MINI) to test the benefit of using minimal rules for model training. 
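For concreteness, the network described in Section 3.1 with the sizes given above (50-dimensional embeddings, two hidden layers of 100 units, two outputs) can be sketched as follows. This is our own PyTorch illustration rather than the authors' implementation; the tanh activation and the exact input layout are assumptions.

```python
import torch
import torch.nn as nn

class CSRSScorer(nn.Module):
    """Binary rule-selection network: Pr(v | e, t, C) as a 2-way softmax."""

    def __init__(self, n_subtrees, n_targets, n_words,
                 num_context=24, dim=50, hidden=100):
        super().__init__()
        self.subtree_embed = nn.Embedding(n_subtrees, dim)  # one vector per subtree t
        self.target_embed = nn.Embedding(n_targets, dim)    # one vector per target string e
        self.word_embed = nn.Embedding(n_words, dim)        # context word vectors
        # each context configuration in Sec. 3.1 yields 24 context words (4 spans x n)
        self.hidden1 = nn.Linear((2 + num_context) * dim, hidden)
        self.hidden2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 2)                     # Pr(v=1|.), Pr(v=0|.)

    def forward(self, subtree_id, target_id, context_ids):
        # subtree_id, target_id: (batch,); context_ids: (batch, num_context)
        ctx = self.word_embed(context_ids).flatten(start_dim=1)
        x = torch.cat([self.subtree_embed(subtree_id),
                       self.target_embed(target_id), ctx], dim=1)
        h = torch.tanh(self.hidden1(x))
        h = torch.tanh(self.hidden2(h))
        return torch.softmax(self.out(h), dim=1)
```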
The MERS and CSRS models were both used to calculate features used to rerank unique 1,000best outputs of the baseline system. Tuning is performed to maximize BLEU score using minimum error rate training (Och, 2003). 4.2 Results Table 2 shows the translation results and Table 3 shows significance test results using bootstrap resampling (Koehn, 2004): “Base” stands for the baseline system without any; “MERS”, “CSRS”, “MERS-MINI” and “CSRS-MINI” means the outputs of the baseline system were reranked using features from the MERS, CSRS, MERS-MINI and CSRS-MINI models respectively. Generally, the CSRS model outperformed the MERS model and the CSRS-MINI model outperformed the MERSMINI model on different translation tasks. In addition, using minimal rules for model training benefitted both the MERS and CSRS models. Table 4 shows translation examples in the EC task to demonstrate the reason why our approach improved accuracy. Among all translations, TCSRS−MINI is basically the same as the reference with only a few paraphrases that do not alter the meaning of the sentence. In con1377 Source typical dynamic response rate of an optical gap sensor as described above is approximately 2 khz , or 0.5 milliseconds . Reference 上述(described above) 光学(optical) 间隙(gap) 传感器(sensor) 的典型(typical) 动态(dynamic) 响 应(response) 率(rate) 约(approximately) 为(is) 2KHz 或(or) 为0.5 毫秒(milliseconds) 。 TBase 典型(typical) 的动态(dynamic) 响应(response) 速率(rate) 间隙(gap) 传感器(sensor) 的 光学(optical) 如上(above) 描述(described) 的是(is) 约(approximately) 2 千赫(khz) 兹,或(or) 0.5 毫秒(milliseconds) 。 TMERS 典型(typical) 的动态(dynamic) 响应(response) 率(rate) 光学(optical) 传感器(sensor) ,如(as) 以上(above) 所述(described) 间隙(gap) 的约(approximately) 2 千赫(khz) ,或(or) 0.5 毫 秒(milliseconds) 。 TCSRS 光学(optical) 传感器(sensor) ,如(as) 以上(above) 所述(described) 间隙(gap) 的典型(typical) 的动态(dynamic) 响应(response) 速率(rate) 为(is) 约(approximately) 2 千赫(khz) 兹,或(or) 0.5 毫秒(milliseconds) 。 TMERS−MINI 典型(typical) 的 动态(dynamic) 响应(response) 率(rate) 间隙(gap) 传感器(sensor) 的 光学(optical) 如上(above) 描述(described) 的是(is) 约(approximately) 2 千赫(khz) 兹, 或(or) 0.5 毫秒(milliseconds) 。 TCSRS−MINI 如上(above) 描述(described) 的 光学(optical) 间隙(gap) 传感器(sensor) 典型(typical) 的动 态(dynamic) 响应(response) 速率(rate) 为(is) 约(approximately) 2 千赫(khz) 兹,或(or) 0.5 毫 秒(milliseconds) 。 Table 4: Translation examples. R1: TMERS&TCSRS PP ( IN ( “of” ) NP ( NP ( DT ( “an” ) NP’ ( JJ ( “optical” ) x0:NN ) ) x1:NP’ ) ) →“光学(optical)” x1 x0 “的” R2: TMERS−MINI PP ( IN ( “of” ) NP ( NP ( DT ( “an” ) NP’ ( JJ ( “optical” ) x0:NP’ ) ) x1:SBAR ) ) →x0 “的” “光学(optical)” x1 “的” R3 : TCSRS−MINI NP’ ( JJ ( “optical” ) x0:NP’ ) →“光学(optical)” x0 Table 5: Rules used to translate the source word “optical” in different translations. Shadows (R3) stand for ambiguous rules. trast, TBase, TMERS, TCSRS and TMERS−MINI all contain apparent mistakes. For example, the source phrase “optical gap sensor” (covered by gray shadows in Table 4) is wrongly translated in TBase, TMERS, TCSRS and TMERS−MINI due to incorrect reorderings. Table 5 shows rules used to translate the source word “optical” in different translations: R1 is used in TMERS and TCSRS; R2 is used in TMERS−MINI; R3 is used in TCSRS−MINI. Although the source word “optical” is translated to the correct translation “光学(optical)” in all translations, R1, R2 and R3 cause different reorderings for the source phrase “optical gap sensor”. R3 reorders this source phrase correctly while R1 and R2 cause wrong reorderings for this source phrase. 
We can see that R1 is umambiguous, so the MERS and CSRS models will give probability 1 to R1, which could make the MERS and CSRS models prefer TMERS and TCSRS. This is a typical translation error caused by sparse rules since the source subtree in R1 does not have other translations in the training corpus. To compare the MERS-MINI and CSRS-MINI models, Table 6 shows minimal rules (R2a, R2b, R3a and R3b) contained in R2 and R3. Table 7 shows probabilities of these minimal rules calculated by the MERS-MINI and CSRS-MINI models respectively. We can see that the CSRSMINI model gave higher scores for the correct translation rules R3a and R3b than the MERSMINI model, while the MERS-MINI model gave a higher score to the incorrect rule R2b than the CSRS-MINI model. Note that R2b and R3b are the same rule, but the target-side context in TMERS−MINI and TCSRS−MINI is different. The CSRS-MINI model will give R2b and R3b different scores because the CSRS-MINI model used target-side context. However, the MERS-MINI model only used source-side features and gave R2b and R3b the same score. The fact that the CSRS-MINI model 1378 R2a PP ( IN ( “of” ) NP ( NP ( DT ( “an” ) NP’ ( x0:JJ x1:NP’ ) ) x2:SBAR ) ) →x1 “的” x0 x2 “的” R2b JJ ( “optical” ) →“光学(optical)” R3a NP’ ( x0:JJ x1:NP’ ) →x0 x1 R3b JJ ( “optical” ) →“光学(optical)” Table 6: Minimal rules contained in R2 and R3. Shadows (R2b, R3a and R3b) stand for ambiguous rules. MERS-MINI CSRS-MINI R2a 1 1 R2b 0.5441 0.09632 R3a 0.9943 0.9987 R3b 0.5441 0.7317 Table 7: Scores of minimal rules. gave a higher score for R3b than R2b means that the CSRS-MINI model predicted the target string in R2b and R3b is a good translation in the context of TCSRS−MINI but not so good in the context of TMERS−MINI. As we can see, the target phrase “如上(above) 描述(described) 的(of) 光学(optical) 间隙(gap) 传感器(sensor)” around “光学(optical)” in TCSRS−MINI is a reasonable Chinese phrase while the target phrase “间隙(gap) 传感器(sensor) 的(of) 光学(optical) 如上(above) 描述(described) 的(of)” around “光学(optical)” in TMERS−MINI does not make sense. Namely, the CSRS model trained with target-side context can perform rule selection considering target sentence fluency, which is the reason why target-side context can help in the rule selection task. 4.3 Analysis To analyze the influence of different features, we trained the MERS model using source-side and target-side n-gram lexical features similar to the CSRS model. When using this feature set, the performance of the MERS model dropped significantly. This indicates that the syntactic, POS and span features used in the original MERS model are important for their model, since these features can generalize better. Purely lexical features are less effective due to sparsity problems when training one maximum entropy based classifier for each ambiguous source subtree and training data for each classifier is quite limited. In contrast, the CSRS model is trained in a continuous space and does not split training data, which relieves the sparsity problem of lexical features. As a result, the CSRS model achieved better performance using only lexical features compared to the MERS model. We also tried to use pre-trained word embedding features for the MERS model, but it did not improve the performance of the MERS model, which indicates that the log-linear model is not able to benefit from distributed representations as well as the neural network model. 
We also tried reranking with both the CSRS and MERS models added as features, but it did not achieve further improvement compared to only using the CSRS model. This indicates that although these two models use different type of features, the information contained in these features are similar. For example, the POS features used in the MERS model and the distributed representations used in the CSRS model are both used for better generalization. In addition, using both the CSRS and CSRSMINI models did not improve over using only the CSRS-MINI model in our experiments. There are two main differences between the CSRS and CSRS-MINI models. First, minimal rules are more frequent and have more training data than non-minimal rules, which is why the CSRS-MINI model is more robust than the CSRS model. Second, non-minimal rules contain more information than minimal rules. For example, in Figure 3, Rule4 contains more information than Rule1, which could be an advantage for rule selection. However, the information contained in Rule4 will be considered as context features for Rule1. Therefore, this is no longer an advantage for the CSRS model as long as we use rich enough context features, which could be the reason why using both the CSRS and CSRS-MINI models cannot further improve the translation quality compared to using only the CSRS-MINI model. 5 Related Work The rule selection problem for syntax-based SMT has received much attention. He et al. (2008) proposed a lexicalized rule selection model to perform context-sensitive rule selection for hierarchical phrase-base translation. Cui et al. (2010) introduced a joint rule selection model for hierarchical phrase-based translation, which also approximated the rule selection problem by a binary classification problem like our approach. However, these two models adopted linear classifiers similar to those used in the MERS model (Liu et al., 2008), which suffers more from the data sparsity 1379 problem compared to the CSRS model. There are also existing works that exploited neural networks to learn translation probabilities for translation rules used in the phrase-based translation model. Namely, these methods estimated translation probabilities for phrase pairs extracted from the parallel corpus. Schwenk (2012) proposed a continuous space translation model, which calculated the translation probability for each word in the target phrase and then multiplied the probabilities together as the translation probability of the phrase pair. Gao et al. (2014) and Zhang et al. (2014) proposed methods to learn continuous space phrase representations and use the similarity between the source and target phrases as translation probabilities for phrase pairs. All these three methods can only be used for the phrase-based translation model, not for syntaxbased translation models. There are also works that used minimal rules for modeling. Vaswani et al. (2011) proposed a rule Markov model using minimal rules for both training and decoding to achieve a slimmer model, a faster decoder and comparable performance with using non-minimal rules. Durrani et al. (2013) proposed a method to model with minimal translation units and decode with phrases for phrasebased SMT to improve translation performances. Both of these two methods do not use distributed representations as used in our model for better generalization. 
In addition, neural machine translation (NMT) has shown promising results recently (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b). NMT uses a recurrent neural network to encode the whole source sentence and then produce the target words one by one. These models can be trained on parallel corpora and do not need word alignments to be learned in advance. There are also neural translation models that are trained on word-aligned parallel corpus (Devlin et al., 2014; Meng et al., 2015; Zhang et al., 2015; Setiawan et al., 2015), which use the alignment information to decide which parts of the source sentence are more important for predicting one particular target word. All these models are trained on plain source and target sentences without considering any syntactic information while our neural model learns rule selection for tree-based translation rules and makes use of the tree structure of natural language for better translation. There is also a new syntactic NMT model (Eriguchi et al., 2016), which extends the original sequence-to-sequence NMT model with the source-side phrase structure. Although this model takes source-side syntax into consideration, it still produces target words one by one as a sequence. In contrast, the tree-based translation rules used in our model can take advantage of the hierarchical structures of both source and target languages. 6 Conclusion In this paper, we propose a CSRS model for syntax-based SMT, which is learned by a feedforward neural network on a continuous space. Compared with the previous MERS model that used discrete representations of words as features, the CSRS model uses real-valued vector representations of words and can exploit similarity information between words for better generalization. In addition, we propose to use only minimal rules for rule selection to further relieve the data sparsity problem, since minimal rules are more frequent and have richer training data. In our experiments, the CSRS model outperformed the previous MERS model and the usage of minimal rules benefitted both CSRS and MERS models on different translation tasks. For future work, we will explore more sophisticated features for the CSRS model, such as syntactic dependency relationships and head words, since only simple lexical features are used in the current incarnation. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Lei Cui, Dongdong Zhang, Mu Li, Ming Zhou, and Tiejun Zhao. 2010. A joint rule selection model for hierarchical phrase-based translation. In Proc. ACL, pages 6–11. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proc. ACL, pages 1370–1380. Nadir Durrani, Alexander Fraser, and Helmut Schmid. 2013. Model with minimal translation units, but decode with phrases. In Proc. HLT-NAACL, pages 1– 11. 1380 Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. arXiv preprint arXiv:1603.06075. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. HLT-NAACL, pages 273–280. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. 
Scalable inference and training of context-rich syntactic translation models. In Proc. COLING-ACL, pages 961–968. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proc. ACL, pages 699–709. Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K Tsou. 2011. Overview of the patent machine translation task at the NTCIR-9 workshop. In Proc. NTCIR-9, pages 559–578. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In Proc. HLT-NAACL, pages 105– 112. Zhongjun He, Qun Liu, and Shouxun Lin. 2008. Improving statistical machine translation using lexicalized rule selection. In Proc. Coling, pages 321–328. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proc. ACL-IJCNLP, pages 1–10. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP, pages 388–395. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. COLING-ACL, pages 609–616. Qun Liu, Zhongjun He, Yang Liu, and Shouxun Lin. 2008. Maximum entropy based rule selection model for syntax-based statistical machine translation. In Proc. EMNLP, pages 89–97. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proc. EMNLP, pages 1412–1421. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proc. ACL-IJCNLP, pages 11–19. Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. Encoding source language with convolutional neural network for machine translation. In Proc. ACLIJCNLP, pages 20–30. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proc. HLT-ACL, pages 192– 199. Graham Neubig. 2013. Travatar: A forest-to-string machine translation engine based on tree transducers. In Proc. ACL, pages 91–96. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. ACL, pages 160–167. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. In Proc. COLING, pages 1071–1080. Hendra Setiawan, Zhongqiang Huang, Jacob Devlin, Thomas Lamar, Rabih Zbib, Richard Schwartz, and John Makhoul. 2015. Statistical machine translation features with multitask tensor networks. In Proc. ACL-IJCNLP, pages 31–41. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Haitao Mi, Liang Huang, and David Chiang. 2011. Rule markov models for fast treeto-string translation. In Proc. HLT-ACL, pages 856– 864. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large-scale neural language models improves translation. In Proc. EMNLP, pages 1387–1392. Jiajun Zhang, Shujie Liu, Mu Li, Ming Zhou, and Chengqing Zong. 2014. Bilingually-constrained phrase embeddings for machine translation. In Proc. ACL, pages 111–121. Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, and Satoshi Nakamura. 2015. A binarized neural network joint model for machine translation. 
In Proc. EMNLP, pages 2094–2099. Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An improved Chinese word segmentation system with conditional random field. In Proc. SIGHAN, pages 162–165.
2016
130
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1382–1392, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Probabilistic Graph-based Dependency Parsing with Convolutional Neural Network Zhisong Zhang1,2, Hai Zhao1,2,∗, Lianhui Qin1,2 1Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China 2Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China {zzs2011,qinlianhui}@sjtu.edu.cn,[email protected] Abstract This paper presents neural probabilistic parsing models which explore up to thirdorder graph-based parsing with maximum likelihood training criteria. Two neural network extensions are exploited for performance improvement. Firstly, a convolutional layer that absorbs the influences of all words in a sentence is used so that sentence-level information can be effectively captured. Secondly, a linear layer is added to integrate different order neural models and trained with perceptron method. The proposed parsers are evaluated on English and Chinese Penn Treebanks and obtain competitive accuracies. 1 Introduction Neural network methods have shown great promise in the field of parsing and other related natural language processing tasks, exploiting more complex features with distributed representation and non-linear neural network (Wang et al., 2013; Wang et al., 2014; Cai and Zhao, 2016; Wang et al., 2016). In transition-based dependency parsing, neural models that can represent the partial or whole parsing histories have been explored (Weiss et al., 2015; Dyer et al., 2015). While for graphbased parsing, on which we focus in this work, Pei et al. (2015) also show the effectiveness of neural methods. ∗Corresponding author. This paper was partially supported by Cai Yuanpei Program (CSC No. 201304490199 and No. 201304490171), National Natural Science Foundation of China (No. 61170114 and No. 61272248), National Basic Research Program of China (No. 2013CB329401), Major Basic Research Program of Shanghai Science and Technology Committee (No. 15JC1400103), Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04), and Key Project of National Society Science Foundation of China (No. 15-ZDA041). The graph-based parser generally consists of two components: one is the parsing algorithm for inference or searching the most likely parse tree, the other is the parameter estimation approach for the machine learning models. For the former, classical dynamic programming algorithms are usually adopted, while for the latter, there are various solutions. Like some previous neural methods (Socher et al., 2010; Socher et al., 2013), to tackle the structure prediction problems, Pei et al. (2015) utilize a max-margin training criterion, which does not include probabilistic explanations. Re-visiting the traditional probabilistic criteria in log-linear models, this work utilizes maximum likelihood for neural network training. Durrett and Klein (2015) adopt this method for constituency parsing, which scores the anchored rules with neural models and formalizes the probabilities with tree-structured random fields. Motivated by this work, we utilize the probabilistic treatment for dependency parsing: scoring the edges or high-order sub-trees with a neural model and calculating the gradients according to probabilistic criteria. 
Although scores are computed by a neural network, the existing dynamic programming algorithms for gradient calculation remain the same as those in log-linear models. Graph-based methods search globally through the whole space for trees and get the highestscored one, however, the scores for the sub-trees are usually locally decided, considering only surrounding words within a limited-sized window. Convolutional neural network (CNN) provides a natural way to model a whole sentence. By introducing a distance-aware convolutional layer, sentence-level representation can be exploited for parsing. We will especially verify the effectiveness of such representation incorporated with window-based representation. Graph-based parsing has a natural extension 1382 through raising its order and higher-order parsers usually perform better. In previous work on highorder graph-parsing, the scores of high-order subtrees usually include the lower-order parts in their high-order factorizations. In traditional linear models, combining scores can be implemented by including low-order features. However, for neural models, this is not that straightforward because of nonlinearity. A straightforward strategy is simply adding up all the scores, which in fact works well; another way is stacking a linear layer on the top of the representation from various already-trained neural parsing models of different orders. This paper presents neural probabilistic models for graph-based projective dependency parsing, and explores up to third-order models. Here are the three highlights of the proposed methods: • Probabilistic criteria for neural network training. (Section 2.2) • Sentence-level representation learned from a convolutional layer. (Section 3.2) • Ensemble models with a stacked linear output layer. (Section 3.3) Our main contribution is exploring sub-tree scoring models which combine local features with a window-based neural network and global features from a distance-aware convolutional neural network. A free distribution of our implementation is publicly available1. The remainder of the paper is organized as follows: Section 2 explains the probabilistic model for graph-based parsing, Section 3 describes our neural network models, Section 4 presents our experiments and Section 5 discusses related work, we summarize this paper in Section 6. 2 Probabilistic Graph-based Dependency Parsing 2.1 Graph-based Dependency Parsing Dependency parsing aims to predict a dependency tree, in which all the edges connect head-modifier pairs. In graph-based methods, a dependency tree is factored into sub-trees, from single edge to multiple edges with different patterns; we will call these specified sub-trees factors in this paper. According to the sub-tree size of the factors, we can 1https://github.com/zzsfornlp/nnpgdparser h m h s m g h s m 1st order 2nd order (sibling) 3rd order (grand-sibling) Figure 1: The decompositions of factors. define the order of the graph model. Three different ordered factorizations considered in this work and their sub-tree patterns are shown in Figure 1. The score for a dependency tree (T) is defined as the sum of the scores of all its factors (p): Score(T) = X p∈T Score(p) In this way, the dependency parsing task is to find a max-scoring tree. For projective dependency parsing considered in this work, this searching problem is conquered by dynamic programming algorithms with the key assumption that the factors are scored independently. 
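To make the factorization concrete, the sketch below is not taken from the paper; `heads`, `score_factor`, and the flat handling of siblings (which ignores the left/right-side split used in the full second- and third-order factorizations) are illustrative simplifications. It only shows how a tree score is obtained by summing the scores of its factors at the chosen order.

```python
# Illustrative decomposition of a dependency tree into scored factors.
# heads[m] gives the head index of word m (index 0 is the artificial root);
# score_factor stands in for any factor scorer (linear or neural).

def modifiers(heads, h):
    """Modifiers of head h, in left-to-right surface order."""
    return [m for m in range(1, len(heads)) if heads[m] == h]

def tree_score(heads, score_factor, order=1):
    total = 0.0
    for h in range(len(heads)):
        prev = None                                   # previous sibling s
        for m in modifiers(heads, h):
            if order == 1:
                total += score_factor(h=h, m=m)                  # single edge
            elif order == 2:
                total += score_factor(h=h, m=m, s=prev)          # sibling factor
            else:
                g = heads[h] if h > 0 else None
                total += score_factor(h=h, m=m, s=prev, g=g)     # grand-sibling
            prev = m
    return total
```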
Previous work (Eisner, 1996; McDonald et al., 2005; McDonald and Pereira, 2006; Koo and Collins, 2010; Ma and Zhao, 2012) explores ingenious algorithms for decoding, ranging from first order to higher orders. Our proposed parsers also take these algorithms as backbones and use them for inference. 2.2 Probabilistic Model With the graph factorization and inference, the remaining problems are how to obtain the scores and how to train the scoring model. For the scoring models, traditional linear methods utilize manually specified features and linear scoring models, while we adopt neural network models, which may exploit better feature representations. For the training methods, in recent neural graph-based parsers, non-probabilistic margin-based methods are usually used. However, following the maximum likelihood criteria in traditional log-linear models, we can treat training in a probabilistic way. In fact, the probabilistic treatment still utilizes the scores of sub-tree factors in graph models. As in log-linear models like the Conditional Random Field (CRF) (Lafferty et al., 2001), the exponentials of scores are taken before re-normalizing, and the probability distribution over trees conditioned on a sentence X is defined as follows: $\Pr(T|X, \theta) = \frac{1}{Z(X)} \exp(\mathrm{Score}(T|\theta))$, with $Z(X) = \sum_{T'} \exp(\mathrm{Score}(T'|\theta))$, where $\theta$ represents the parameters and $Z(X)$ is the re-normalization partition function. The intuition is that the higher the score is, the more potential or mass it will get, leading to higher probability. The training criterion will be log-likelihood in the classical setting of maximum likelihood estimation, and we define the loss for a parse tree as the negative log-likelihood: $L(\theta) = -\log \Pr(T_g|X, \theta) = -\mathrm{Score}(T_g|\theta) + \log Z(X)$, where $T_g$ stands for the gold parse tree. Now we need to calculate the gradients of $\theta$ for gradient-based optimization. Focusing on the second term, we have (some conditions are left out for simplicity): $\frac{\partial \log Z(X)}{\partial \theta} = \sum_{T'} \Pr(T') \sum_{p \in T'} \frac{\partial \mathrm{Score}(p)}{\partial \theta} = \sum_{p} \frac{\partial \mathrm{Score}(p)}{\partial \theta} \sum_{T' \in T(p)} \Pr(T')$. Here, $T(p)$ is the set of trees that contain the factor $p$, and the inner summation is defined as the marginal probability $m(p)$: $m(p) = \sum_{T' \in T(p)} \Pr(T')$, which can be viewed as the mass of all the trees containing the specified factor $p$. The calculation of $m(p)$ (Paskin, 2001; Ma and Zhao, 2015) is solved by a variant of the inside-outside algorithm, which has the same complexity as the corresponding inference algorithm. Finally, the gradients can be represented as: $\frac{\partial L(\theta)}{\partial \theta} = \sum_{p} \frac{\partial \mathrm{Score}(p)}{\partial \theta} \left( -[p \in T_g] + m(p) \right)$, where $[p \in T_g]$ is a binary value which indicates whether $p$ is in tree $T_g$. Traditional models usually use linear functions for the Score function, which might need careful feature engineering such as (Zhao et al., 2009a; Zhao et al., 2009b; Zhao et al., 2009c; Zhao, 2009; Zhao et al., 2013), while we adopt neural models with the probabilistic training criteria unchanged. 2.3 Training Criteria We take a closer look at the relation between the maximum-likelihood criterion and the max-margin criterion. For the max-margin method, the loss is the difference between the scores of the gold tree and a predicted tree, and its sub-gradient can be written in a similar form: $\frac{\partial L_m(\theta)}{\partial \theta} = \sum_{p} \frac{\partial \mathrm{Score}(p)}{\partial \theta} \left( -[p \in T_g] + [p \in T_b] \right)$. Here, the predicted tree $T_b$ is the best-scored tree with a structured margin loss included in the score.
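Before comparing the two criteria further, here is a minimal sketch of how the probabilistic gradient above reduces to a per-factor quantity that can be back-propagated through the scoring network. The `marginals` callable (an inside-outside variant) and the pre-enumerated `factors` are assumptions for illustration, not the authors' code.

```python
# Sketch: the gradient of the negative log-likelihood with respect to each
# factor score is m(p) - [p in gold tree]; everything below the factor scores
# is ordinary back-propagation through the neural scorer.

def factor_score_gradients(factors, gold_factors, score_factor, marginals):
    scores = {p: score_factor(p) for p in factors}   # neural scores for all factors
    m = marginals(scores)                            # marginal probability m(p)
    return {p: m[p] - (1.0 if p in gold_factors else 0.0) for p in scores}
```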
Comparing the derivatives, we can see that the one of probabilistic criteria can be viewed as a soft version of the max-margin criteria, and all the possible factors are considered when calculating gradients for the probabilistic way, while only wrongly predicted factors have non-zero subgradients for max-margin training. This observation is not new and Gimpel and Smith (2010) provide a good review of several training criteria. It might be interesting to explore the impacts of different training criteria on the parsing performance, and we will leave it for future research. 2.4 Labeled Parsing In a dependency tree, each edge can be given a label indicating the type of the dependency relation, this labeling procedure can be integrated directly into the parsing task, instead of a second pass after obtaining the structure. For the probabilistic model, integrating labeled parsing only needs some extensions for the inference procedure and marginal probability calculations. For the simplicity, we only consider a single label for each factor (even for high-order ones) which corresponds to Model 1 in (Ma and Hovy, 2015): the label of the edge between head and modifier word, which will only multiply O(l) to the complexity. We find this direct approach not only achieves labeled parsing in one pass, but also improves unlabeled attachment accuracies (see Section 4.3), which may benefit from the joint learning with the labels. 3 Neural Model The task for the neural models is computing the labeled scores of the factors. The inputs are the words in a factor with contexts, and the outputs are the scores for this factor to be valid in the dependency tree. We propose neural models to in1384 This is a good game . DT JJ NN Modifier Sibling Head Embdding Hidden2 Output Hidden1 s = Wsh2 + bs h2 = tanh(W2h1 + b2) h1 = tanh(W1h0 + b1) h0 Figure 2: The architecture for the basic model (second order parsing). tegrate features from both local word-neighboring windows and the entire sentence, and furthermore explore ensemble models with different orders. 3.1 Basic Local Model Architecture The basic model uses a windowbased approach, which includes only surrounding words for the contexts. Figure 2 illustrates a second-order sibling model and models of other orders adopt similar structures. It is simply a standard feed-forward neural network with two hidden layers (h1 and h2) above the embedding layer (h0), the hidden layers all adopt tanh activation function, and the output layer (noted as s) directly represents the scores for different labels. Feature Sets All the features representing the input factor are atomic and projected to embeddings, then the embedding layer is formed by concatenating them. There are three categories of features: word forms, POS (part-of-speech) tags and distances. For each node in the factor, word forms and POS tags of the surrounding words in a specified window are also considered. Special tokens for start or end of sentences, root node and unknown words are added for both word forms and POS tags. Distances can be negative or positive to represent the relative positions between the factor nodes in surface string. Take the situation for the second-order model as an example, there are three nodes in a factor: h for head, m for modifier and s for sibling. When considering three-word windows, there will be three word forms and three tags for each node and its surrounding context. m and s both have one distance feature while h does not have one as its parent does not exist in the factor. 
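A minimal numpy sketch of this window-based scorer follows; the layer dimensions, the number of labels, and the way feature embeddings are gathered are placeholders rather than the exact configuration used in the paper.

```python
import numpy as np

# Sketch of the basic local model: concatenated feature embeddings (words,
# POS tags, distances) -> two tanh hidden layers -> one score per label.

class BasicFactorScorer:
    def __init__(self, emb_dim, n_feats, h1_dim=200, h2_dim=100, n_labels=46):
        rng = np.random.RandomState(0)
        d_in = emb_dim * n_feats
        self.W1 = rng.uniform(-0.01, 0.01, (h1_dim, d_in))
        self.b1 = np.zeros(h1_dim)
        self.W2 = rng.uniform(-0.01, 0.01, (h2_dim, h1_dim))
        self.b2 = np.zeros(h2_dim)
        self.Ws = rng.uniform(-0.01, 0.01, (n_labels, h2_dim))
        self.bs = np.zeros(n_labels)

    def forward(self, feat_embs):
        h0 = np.concatenate(feat_embs)        # embedding layer for one factor
        h1 = np.tanh(self.W1 @ h0 + self.b1)
        h2 = np.tanh(self.W2 @ h1 + self.b2)
        return self.Ws @ h2 + self.bs         # labeled scores for this factor
```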
Training As stated in Section 2.2, we use the maximum likelihood criteria. Moreover, we add two L2-regularizations: one is for all the weights θ′ (biases and embeddings not included) to avoid over-fitting and another is for preventing the final output scores from growing too large. The former is common practice for neural network, while the latter is to set soft limits for the norms of the scores. Although the second term is not usually adopted, it directly puts soft constraints on the scores and improves the accuracies (about 0.1% for UAS/LAS overall) according to our primary experiments. So the final loss function will be: L′(θ) = X p  Score(p) · −  p ∈Tg  + m(p)  + λs · Score(p)2 + λm · ∥θ′∥2 where λm and λs respectively represent regularization parameters for model and scores. The training process utilizes a mini-batched stochastic gradient descent method with momentum. Comparisons Our basic model resembles the one of Pei et al. (2015), but with some major differences: probabilistic training criteria are adopted, the structures of the proposed networks are different and direction information is encoded in distance features. Moreover, they simply average embeddings in specified regions for phraseembedding, while we will include sentenceembedding in convolutional model as follows. 3.2 Convolutional Model To encode sentence-level information and obtain sentence embeddings, a convolutional layer of the whole sentence followed by a max-pooling layer is adopted. However, we intend to score a factor in a sentence and the position of the nodes should also be encoded. The scheme is to use the distance embedding for the whole convolution window as the position feature. We will take the second-order model as an example to introduce the related operations. Figure 3 shows the convolution operation for a convolution window, the input atomic features are the word forms and POS tags for each word inside the window, and the distances of only the center word (assuming an odd-sized window) to specified nodes in the factor are adopted as position features. In the example, “game-good-a” is to be scored as a second-order sibling factor, and for a 1385 This is a good game . DT VBZ DT Modifier Sibling Head Output Lexical distance v’ l = Wlvl + bl vl v’ d = Wdvd + bd vd dh=-3 dm=-1 ds=-2 Figure 3: The operations for one convolution window (second order parsing). convolution window of “This is a”, word forms and corresponding POS tags are projected to embeddings and concatenated as the lexical vector vl, the distances of the center word “is” to all the three nodes in the factor are also projected to embeddings and concatenated as the distance vector vd, then these two vectors go through difference linear transformations into the same dimension and are combined together through element-wise addition or multiplication. In general, assuming after the projection layer, embeddings of the word forms and POS tags of the sentence are represented as [w0, w1, ..., wn−1] and [p0, p1, ..., pn−1]. Those embeddings in the basic model may be reused here by sharing the embedding look-up table. The second-order sibling factor to be scored has nodes with indexes of m (modifier), h (head) and s (sibling). The distance embeddings are denoted by d, which can be either negative or positive. 
These distance embeddings are different from the ones in the basic model, because here we measure the distances between the convolution window (its center word) and factor nodes, while the distances between nodes inside the factors are measured in the basic model. For a specified window [i : j], always assuming an odd number sized window, and the center token is indexed to c = i+j 2 , the vl and vd are obtained through simple concatenation: vl = [wi, pi, wi+1, pi+1, ..., wj, pj] vd = [dc−h, dc−m, dc−s] then vl and vd go through difference linear transformations into same dimension space: v′l, v′d ∈ Rn, where n is also the dimension of the output vector vo for the window. The linear operations can be expressed as: v′l = Wl · vl + bl v′d = Wd · vd + bd The final vector vo is obtained by element-wise operations of v′l and v′d. We consider two strategies: (1) add: simple element-wise addition, (2) mul: element-wise multiplication with v′d activated by tanh. They can be formalized as: vo-add = v′l ⊕v′d vo-mul = v′l ⊙tanh(v′d) All the windows whose center-located word is valid (exists) in the sentence are considered and we will get a sequence of convolution outputs whose number is the same as the sentence length. The convolution outputs (all vo) are collapsed into one global vector vg using a standard max-pooling operation. Finally, for utilizing the sentence-level representation in the basic model, we can either replace the original first hidden layer h1 with vg or concatenate vg to h1 for combining local and global features. 3.3 Ensemble Models For higher-order dependency parsing, it is a standard practice to include the impact of lower-order parts in the scoring of higher-order factors, which actually is an ensemble method of different order models for scoring. A simple adding scheme is often used. For nonlinear neural models, we use an explicit adding method. For example, in third-order parsing, the final score for the factor (g, h, m, s) will be: sadd(g, h, m, s) = so3(g, h, m, s) + so2(h, m, s) + so1(h, m) Here, g, h, m and s represent the grandparent, head, modifier and sibling nodes in the grandsibling third-order factor; so1, so2 and so3 stand for the corresponding lower-order scores from first, second and third order models, respectively. We notice that ensemble or stacking methods for dependency parsing have explored in previous work (Nivre and McDonald, 2008; Torres Martins et al., 2008). Recently, Weiss et al. (2015) stack a linear layer for the final scoring in a single model, and we extend this method to combine multiple models by stacking a linear layer on their output and hidden layers. The simple adding scheme can 1386 be viewed as adopting a final layer with specially fixed weights. For each model to be combined, we concatenate the output layer and all hidden layers (except embedding layer h0): vall = [s, h1, h2] All vall from different models are again concatenated to form the input for the final linear layer and the final scores are obtained through a linear transformation (no bias adding): vcombine = [vall-o1, vall-o2, vall-o3] scombine = Wcombine · vcombine We no longer update weights for the underlying neural models, and the learning of the final layer is equally training a linear model, for which structured average perceptron (Collins, 2002; Collins and Roark, 2004) is adopted for simplicity. 
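The two ensemble schemes just described can be sketched as follows; the model interfaces (`score`, `forward`) and the already-trained o1/o2/o3 scorers are assumptions for illustration only.

```python
import numpy as np

# Sketch of the two ensemble schemes for a grand-sibling factor (g, h, m, s).
# o1/o2/o3 are frozen first-, second- and third-order scorers.

def score_adding(o1, o2, o3, g, h, m, s):
    # simple adding scheme: sum the scores of the nested lower-order factors
    return o3.score(g, h, m, s) + o2.score(h, m, s) + o1.score(h, m)

def score_stacked(models, factor, W_combine):
    # stacked scheme: concatenate output + hidden layers of each frozen model
    # and apply one linear layer (trained separately, e.g. with a perceptron)
    v_all = []
    for model in models:
        s, h1, h2 = model.forward(factor)
        v_all.extend([s, h1, h2])
    v_combine = np.concatenate(v_all)
    return W_combine @ v_combine              # no bias term
```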
This ensemble scheme can be extended in several ways which might be explored in future work: (1) feed-forward network can be stacked rather than a single linear layer, (2) traditional sparse features can also be concatenated to vcombine to combine manually specified representations with distributed neural representations as in (Zhang and Zhang, 2015). 4 Experiments The proposed parsers are evaluated on English Penn Treebank (PTB) and Chinese Penn Treebank (CTB). Unlabeled attachment scores (UAS), labeled attachment scores (LAS) and unlabeled complete matches (CM) are the metrics. Punctuations2 are ignored as in previous work (Koo and Collins, 2010; Zhang and Clark, 2008). For English, we follow the splitting convention for PTB3: sections 2-21 for training, 22 for developing and 23 for test. We prepare three datasets of PTB, using different conversion tools: (1) Penn2Malt3 and the head rules of Yamada and Matsumoto (2003), noted as PTB-Y&M; (2) dependency converter in Stanford parser v3.3.0 with Stanford Basic Dependencies (De Marneffe et al., 2006), noted as PTB-SD; (3) LTH Constituentto-Dependency Conversion Tool4 (Johansson and 2Tokens whose gold POS tags are one of {“ ” : , .} for PTB or PU for CTB. 3http://stp.lingfil.uu.se/˜nivre/research/Penn2Malt.html 4http://nlp.cs.lth.se/software/treebank converter Nugues, 2007), noted as PTB-LTH. We use Stanford POS tagger (Toutanova et al., 2003) to get predicted POS tags for development and test sets, and the accuracies for their tags are 97.2% and 97.4%, respectively. For Chinese, we adopt the splitting convention for CTB5 described in (Zhang and Clark, 2008). The dependencies (noted as CTB), are converted with the Penn2Malt converter. Gold segmentation and POS tags are used as in previous work. 4.1 Settings Settings of our models will be described in this sub-section, including pre-processing and initializations, hyper-parameters, and training details. We ignore the words that occur less than 3 times in the training treebank and use a special token to replace them. For English parsing, we initialize word embeddings with word vectors trained on Wikipedia using word2vec (Mikolov et al., 2013); all other weights and biases are initialized randomly with uniform distribution. For the structures of neural models, all the embeddings (word, POS and distances) have dimensions of 50. For basic local models, h1 and h2 are set to 200 and 100, and the local window size is set to 7. For convolutional models, a three-word-sized window for convolution is specified, and convolution output dimension (number of filters) is 100. When concatenating the convolution vector (after pooling) to h1, it will make the first hidden layer’s dimension 300. For the training of neural network, we set the initial learning rate to 0.1 and the momentum to 0.6. After each iteration, the parser is tested on the development set and if the accuracy decreases, the learning rate will be halved. The learning rate will also be halved if no decreases of the accuracy for three epochs. We train the neural models for 12 epochs and select the one that performs best on the development set. The regularization parameters λm and λs are set to 0.0001 and 0.001. For the perceptron training of the ensemble model, only one epoch is enough based on the results of the development set. The runtime of the model is influenced by the hyper-parameter setting. 
According to our experiments, using dual-core on 3.0 GHz i7 CPU, the training costs 6 to 15 hours for different-order models and the testing is comparably efficient as recent neural graph-parsers. The calculation of the 1387 Method UAS LAS CM Basic (first-order) Unlabeled 91.53 – 42.82 Labeled 92.13 89.60 45.06 Labeled+pre-training 92.19 89.73 45.18 Convolutional (first-order) replace-add 92.26 89.83 44.76 replace-mul 92.02 89.61 44.24 concatenate-add 92.63 90.20 46.18 concatenate-mul 92.33 89.83 44.94 Higher-orders o2-nope 92.85 90.51 49.65 o2-adding 93.47 91.13 51.41 o2-perceptron 93.63 91.39 51.53 o3-nope 92.47 90.01 49.06 o3-adding 93.70 91.37 53.53 o3-perceptron 93.51 91.20 51.76 Table 1: Effects of the components, on PTB-SD development set. convolution model approximately takes up 40% of all computations. The convolution operation indeed costs more, but the lexical parts v′l of the convolution do not concern the factors and are computed only once for one sentence, which makes it less computationally expensive. 4.2 Pruning For high-order parsing, the computation cost rises in proportion to the length of the sentence, and it will be too expensive to calculate scores for all the factors. Fortunately, many edges are quite unlikely to be valid and can be pruned away using low-order models. We follow the method of Koo and Collins (2010) and directly use the first-order probabilistic neural parser for pruning. We compute the marginal probability m(h, m) for each edge and prune away the edges whose marginal probability is below ϵ × maxh′ m(h′, m). ϵ means the pruning threshold that is set to 0.0001 for second-order. For third-order parsing, considering the computational cost, we set it to 0.001. 4.3 Model Analysis This section presents experiments to verify the effectiveness of the proposed methods and only the PTB-SD development set will be used in these experiments, which fall into three groups concerning basic models, convolutional models and ensemble ones, as shown in Table 1. The first group focuses on the basic local models of first order. The first two, Unlabeled and Labeled, do not use pre-training vectors for initialization, while the third, Labeled+pre-training, utilizes them. The Unlabeled does not utilize the 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 5 10 15 20 25 30 F1 Dependency Length labeled+pre-training (no CNN) replace-add (only CNN) concatenate-add (plus CNN) Figure 4: F1 measure of different dependency lengths, on PTB-SD development set. labels in training set and its model only gives one dependency score (we do not train a second stage labeling model, so the LAS of the unlabeled one is not available) and the Labeled directly predicts the scores for all labels. We can see that labeled parsing not only demonstrates the convenience of outputting dependency relations and labels for once, but also obtains better parsing performances. Also, we observe that pre-trained word vectors bring slight improvements. Pre-trained initialization and labeled parsing will be adopted for the next two groups and the rest experiments. Next, we explore the effectiveness of the CNN enhancement. In the four entries of this group, concatenate or replace means whether to concatenate the sentence-level vector vg to the first hidden layer h1 or just replace it (just throw away the representation from basic models), add or mul means to use which way for attaching distance information. 
Surprisingly, simple adding method surpasses the more complex multiplication-withactivation method, which might indicate that the direct activation operation may not be suitable for encoding distance information. With no surprises, the concatenating method works better because it combines both the local window-based and global sentence-level information. We also explore the influences of the convolution operations on dependencies of different lengths, as shown in Figure 4, the convolutional methods help the decisions of long-range dependencies generally. For the highorder parsing in the rest of this paper, we will all adopt the concatenate-add setting. In the third group, we can see that high-order parsing brings significant performance improvement. For high-order parsing, three ensemble schemes are examined: no combination, adding 1388 PTB-Y&M PTB-SD PTB-LTH CTB Methods UAS LAS CM UAS LAS CM UAS LAS CM UAS LAS CM Graph-NN:proposed o3-adding 93.20 92.12 48.92 93.42 91.29 50.37 93.14 90.07 43.38 87.55 86.19 35.65 o3-perceptron 93.31 92.23 50.00 93.42 91.26 49.92 93.12 89.53 43.83 87.65 86.17 36.07 Graph-NN:others Pei et al. (2015) 93.29 92.13 – – – – – – – – – – Fonseca and Alu´ısio (2015) – – – – – – 91.6– 88.9– – – – – Zhang and Zhao (2015) – – – – – – 92.52 – 41.10 86.01 – 31.88 Graph-Linear Koo and Collins (2010) 93.04 – – – – – – – – – – – Martins et al. (2013) 93.07 – – 92.82 – – – – – – – – Ma and Zhao (2015) 93.0– – 48.8– – – – – – – 87.2– – 37.0– Transition-NN Chen and Manning (2014) – – – 91.8– 89.6– – 92.0– 90.7– – 83.9– 82.4– – Dyer et al. (2015) – – – 93.1– 90.9– – – – – 87.2– 85.7– – Weiss et al. (2015) – – – 93.99 92.05 – – – – – – – Zhou et al. (2015) 93.28 92.35 – – – – – – – – – – Table 2: Comparisons of results on the test sets. and stacking another linear perceptron layer (with the suffixes of -nope, -adding and -perceptron respectively). The results show that model ensemble improves the accuracies quite a few. For thirdorder parsing, the no-combination method performs quite poorly compared to the others, which may be caused by the relative strict setting of the pruning threshold. Nevertheless, with model ensemble, the third-order models perform better than the second-order ones. Though the perceptron strategy does not work well for third-order parsing in this dataset, it is still more general than the simple adding method, since the latter can be seen as a special parameter setting of the former. 4.4 Results We show the results of two of the best proposed parsers: third-order adding (o3-adding) and thirdorder perceptron (o3-perceptron) methods, and compare with the reported results of some previous work in Table 2. We compare with three categories of models: other Graph-based NN (neural network) models, traditional Graph-based Linear models and Transition-based NN models. For PTB, there have been several different dependency converters which lead to different sets of dependencies and we choose three of the most popular ones for more comprehensive comparisons. Since not all work report results on all of these dependencies, some of the entries might be not available. From the comparison, we see that the proposed parser has output competitive performance for different dependency conversion conventions and treebanks. Compared with traditional graphbased linear models, neural models may benefit from better feature representations and more general non-linear transformations. 
The results and comparisons in Table 2 demonstrate the proposed models can obtain comparable accuracies, which show the effectiveness of combining local and global features through windowbased and convolutional neural networks. 5 Related Work CNN has been explored in recent work of relation classification (Zeng et al., 2014; Chen et al., 2015), which resembles the task of deciding dependency relations in parsing. However, relation classification usually involves labeling for given arguments and seldom needs to consider the global structure. Parsing is more complex for it needs to predict structures and the use of CNN should be incorporated with the searching algorithms. Neural network methods have been proved effective for graph-based parsing. Lei et al. (2014) explore a tensor scoring method, however, it needs to combine scores from linear models and we are not able to compare with it because of different datasets (they take datasets from CoNLL shared task). Zhang and Zhao (2015) also explore a probabilistic treatment, but its model may give mass to illegal trees or non-trees. Fonseca and Alu´ısio (2015) utilize CNN for scoring edges, though only explore first-order parsing. Its model is based on head selection for each modifier and might be difficult to be extended to high-order parsing. Recently, several neural re-ranking models, like Inside-Outside Recursive Neural Network 1389 (Le and Zuidema, 2014) and Recursive CNN (Zhu et al., 2015), are utilized for capturing features with more contexts. However, re-ranking models depend on the underlying base parsers, which might already miss the correct trees. Generally, the re-ranking techniques play a role of additional enhancement for basic parsing models, and therefore they are not included in our comparisons. The conditional log-likelihood probabilistic criterion utilized in this work is actually a (conditioned) Markov Random Field for tree structures, and it has been applied to parsing since long time ago. Johnson et al. (1999) utilize the Markov Random Fields for stochastic grammars and gradient based methods are adopted for parameter estimations, and Geman and Johnson (2002) extend this with dynamic programming algorithms for inference and marginal-probability calculation. Collins (2000) uses the same probabilistic treatment for re-ranking and the denominator only includes the candidate trees which can be seen as an approximation for the whole space of trees. Finkel et al. (2008) utilize it for feature-based parsing. The probabilistic training criterion for linear graphbased dependency models have been also explored in (Li et al., 2014; Ma and Zhao, 2015). However, these previous methods usually exploit loglinear models utilizing sparse features for input representations and linear models for score calculations, which are replaced by more sophisticated distributed representations and neural models, as shown in this work. 6 Conclusions This work presents neural probabilistic graphbased models for dependency parsing, together with a convolutional part which could capture the sentence-level information. With distributed vectors for representations and complex non-linear neural network for calculations, the model can effectively capture more complex features when deciding the scores for sub-tree factors and experiments on standard treebanks show that the proposed techniques improve parsing accuracies. References Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. In Proceedings of ACL, Berlin, Germany, August. 
Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740–750, Doha, Qatar, October. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of ACL, pages 167–176, Beijing, China, July. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 25–70. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, volume 6, pages 449–454. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of ACL, pages 302–312, Beijing, China, July. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL, pages 334– 343, Beijing, China, July. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics, pages 340–345, Copenhagen, August. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL, pages 959–967, Columbus, Ohio, June. Erick Fonseca and Sandra Alu´ısio. 2015. A deep architecture for non-projective dependency parsing. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 56–61, Denver, Colorado, June. Stuart Geman and Mark Johnson. 2002. Dynamic programming for parsing and estimation of stochastic unification-based grammars. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 279–286, Philadelphia, Pennsylvania, USA, July. 1390 Kevin Gimpel and Noah A. Smith. 2010. Softmaxmargin crfs: Training log-linear models with cost functions. In Proceedings of NAACL, pages 733– 736, Los Angeles, California, June. Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for english. In 16th Nordic Conference of Computational Linguistics, pages 105–112. University of Tartu. Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic ”unification-based” grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 535–541, College Park, Maryland, USA, June. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of ACL, pages 1–11, Uppsala, Sweden, July. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Phong Le and Willem Zuidema. 2014. The insideoutside recursive neural network model for dependency parsing. In Proceedings of EMNLP, pages 729–739, Doha, Qatar, October. 
Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of ACL, pages 1381–1391, Baltimore, Maryland, June. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Ambiguity-aware ensemble training for semisupervised dependency parsing. In Proceedings of ACL, pages 457–467, Baltimore, Maryland, June. Xuezhe Ma and Eduard Hovy. 2015. Efficient innerto-outer greedy algorithm for higher-order labeled dependency parsing. In Proceedings of EMNLP, pages 1322–1328, Lisbon, Portugal, September. Xuezhe Ma and Hai Zhao. 2012. Fourth-order dependency parsing. In Proceedings of COLING, pages 785–796, Mumbai, India, December. Xuezhe Ma and Hai Zhao. 2015. Probabilistic models for high-order projective dependency parsing. arXiv preprint arXiv:1502.04174. Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of ACL, pages 617–622, Sofia, Bulgaria, August. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91– 98, Ann Arbor, Michigan, June. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950–958, Columbus, Ohio, June. Mark A Paskin. 2001. Cubic-time parsing and learning algorithms for grammatical bigram models. Technical report. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of ACL, pages 313–322, Beijing, China, July. Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop. Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013. Parsing with compositional vector grammars. In Proceedings of ACL, pages 455–465, Sofia, Bulgaria, August. Andr´e Filipe Torres Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of ENNLP, pages 157–166, Honolulu, Hawaii, October. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Rui Wang, Masao Utiyama, Isao Goto, Eiichro Sumita, Hai Zhao, and Bao-Liang Lu. 2013. Converting continuous-space language models into n-gram language models for statistical machine translation. In Proceedings of EMNLP, pages 845–850, Seattle, Washington, USA, October. Rui Wang, Hai Zhao, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2014. Neural network based bilingual language model growing for statistical machine translation. In Proceedings of EMNLP, pages 189–195, Doha, Qatar, October. Peilu Wang, Yao Qian, Frank Soong, Lei He, and Hai Zhao. 2016. 
Learning distributed word representations for bidirectional lstm recurrent neural network. In Proceedings of NAACL, June. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of ACL, pages 323–333, Beijing, China, July. 1391 Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, volume 3, pages 195–206. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344, Dublin, Ireland, August. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graphbased and transition-based dependency parsing. In Proceedings of EMNLP, pages 562–571, Honolulu, Hawaii, October. Meishan Zhang and Yue Zhang. 2015. Combining discrete and continuous features for deterministic transition-based dependency parsing. In Proceedings of EMNLP, pages 1316–1321, Lisbon, Portugal, September. Zhisong Zhang and Hai Zhao. 2015. High-order graph-based neural dependency parsing. In Proceedings of the 29th Pacific Asia Conference on Language, Information, and Computation, pages 114– 123, Shanghai, China, October. Hai Zhao, Wenliang Chen, and Chunyu Kit. 2009a. Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection. In Proceedings of EMNLP, pages 30–39, Singapore, August. Hai Zhao, Wenliang Chen, Chunyu Kity, and Guodong Zhou. 2009b. Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing. In Proceedings of CoNLL, pages 55–60, Boulder, Colorado, June. Hai Zhao, Yan Song, Chunyu Kit, and Guodong Zhou. 2009c. Cross language dependency parsing using a bilingual lexicon. In Proceedings of ACL, pages 55– 63, Suntec, Singapore, August. Hai Zhao, Xiaotian Zhang, and Chunyu Kit. 2013. Integrative semantic dependency parsing via efficient large-scale feature selection. Journal of Artificial Intelligence Research, 46:203–233. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In Proceedings of EACL, pages 879–887, Athens, Greece, March. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of ACL, pages 1213–1222, Beijing, China, July. Chenxi Zhu, Xipeng Qiu, Xinchi Chen, and Xuanjing Huang. 2015. A re-ranking model for dependency parser with recursive convolutional neural network. In Proceedings of ACL, pages 1159–1168, Beijing, China, July. 1392
2016
131
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1393–1402, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Search-Based Dynamic Reranking Model for Dependency Parsing Hao Zhou† Yue Zhang‡ Shujian Huang† Junsheng Zhou∗ Xin-Yu Dai† Jiajun Chen† †State Key Laboratory for Novel Software Technology, Nanjing University, China ‡Singapore University of Technology and Design, Singapore ∗Nanjing Normal University, China [email protected], yue [email protected] {huangshujian,daixinyu,chenjj}@nju.edu.cn, [email protected] Abstract We propose a novel reranking method to extend a deterministic neural dependency parser. Different to conventional k-best reranking, the proposed model integrates search and learning by utilizing a dynamic action revising process, using the reranking model to guide modification for the base outputs and to rerank the candidates. The dynamic reranking model achieves an absolute 1.78% accuracy improvement over the deterministic baseline parser on PTB, which is the highest improvement by neural rerankers in the literature. 1 Introduction Neural network models have recently been exploited for dependency parsing. Chen and Manning (2014) built a seminal model by replacing the SVM classifier at the transition-based MaltParser (Nivre et al., 2007) with a feed-forward neural network, achieving significantly higher accuracies and faster speed. As a local and greedy neural baseline, it does not outperform the best discrete-feature parsers, but nevertheless demonstrates strong potentials for neural network models in transition-based dependency parsing. Subsequent work aimed to improve the model of Chen and Manning (2014) in two main directions. First, global optimization learning and beam search inference have been exploited to reduce error propagation (Weiss et al., 2015; Zhou et al., 2015). Second, recurrent neural network models have been used to extend the range of neural features beyond a local window (Dyer et al., 2015; Ballesteros et al., 2015). These methods give accuracies that are competitive to the best results in the literature. John loves Mary (a) Base Tree John loves Mary (b) Tree 1 John loves Mary (c) Tree 2 1.0 S 1.0 S 0.4 S 0.5 L 0.9 L 0.3 L 1.0 S 0.5 L Revise 1 Tree 1 Tree 2 Base Tree Revise 2 1.0 S 1.0 S 0.3 L 1.0 S 0.5 R 1.0 S 1.0 S (d) 2-step action revising process for sentence “John loves Mary”. Numbers before actions are the probabilities for that action. Figure 1: Example action revising process. S, L, R stand for the SHIFT, LEFT, RIGHT actions, respectively (Section 2). Another direction to extend a baseline parser is reranking (Collins and Koo, 2000; Charniak and Johnson, 2005; Huang, 2008). Recently, neural network models have been used to constituent (Socher et al., 2013; Le et al., 2013) and dependency (Le and Zuidema, 2014; Zhu et al., 2015) parsing reranking. Compared with rerankers that rely on discrete manual features, neural network rerankers can potentially capture more global information over whole parse trees. Traditional rerankers are based on chart parsers, which can yield exact k-best lists and forests. For reranking, this is infeasible for the transitionbased neural parser and neural reranker, which 1393 have rather weak feature locality. In addition, kbest lists from the baseline parser are not necessarily the best candidates for a reranker. 
Our preliminary results show that reranking candidates can be constructed by modifying unconfident actions in the baseline parser output, and letting the baseline parser re-decode the sentence from the modified action. In particular, revising two incorrect actions of the baseline parser yields oracle with 97.79% UAS, which increases to 99.74% by revising five actions. Accordingly, we design a novel search-based dynamic reranking algorithm by revising baseline parser outputs. For example, the sentence: “John loves Mary”, the baseline parser generates a base tree (Figure 1a) using 5 shift-reduce actions (Figure 1d) of Section 2. The gold parse tree can be obtained by a 2-step action revising process: Base Tree Revise 1 −−−−→Tree 1 Revise 2 −−−−→Tree 2 As shown in Figure 1d, we first revise the least confident action S of the base tree, running the baseline parser again from the revised action to obtain tree 1. This corrects the John ↶loves dependency arc. Then we obtain the gold parsing tree (tree 2) by further revising the least confident action in tree 1 on the second action sequence. Rather than relying on the baseline model scores alone for deciding the action to revise (static search), we build a neural network model to guide which actions to revise, as well as to rerank the output trees (dynamic search). The resulting model integrates search and learning, yielding the minimum amount of candidates for the best accuracies. Given the extensively fast speed of the baseline parser, the reranker can be executed with high efficiency. Our dynamic search reranker has two main advantages over the static one: the first is training diversity, the dynamic reranker searches over more different structurally diverse candidate trees, which allows the reranker to distinguish candidates more easily; the second is reranking oracle, with the guidance of the reranking model, the dynamic reranker has a better reranking oracle compared to the static reranker. On WSJ, our dynamic reranker achieved 94.08% and 93.61% UAS on the development and test sets, respectively, at a speed of 16.1 sentences per second. It yields a 0.44% accuracy improvement (+1.78%) from the same number of candi… … … … … x h oact olabel W1 W2 W3 Output Figure 2: Hierarchical neural parsing model. dates, compared to a static reranker (+1.34%), obtaining the largest accuracy improvement among related neural rerankers. 2 Baseline Dependency Parser Transition-based dependency parsers scan an input sentence from left to right, performing a sequence of transition actions to predict its parse tree (Nivre, 2008). We employ the arc-standard system (Nivre et al., 2007), which maintains partially-constructed outputs using a stack, and orders the incoming words in the sentence in a queue. Parsing starts with an empty stack and a queue consisting of the whole input sentence. At each step, a transition action is taken to consume the input and construct the output. Formally, a parsing state is denoted as ⟨j, S, L⟩, where S is a stack of subtrees [... s2, s1, s0], j is the head of the queue (i.e. [ q0 = wj, q1 = wj+1 · · · ]), and L is a set of dependency arcs that has been built. At each step, the parser chooses one of the following actions: • SHIFT (S): move the front word wj from the queue onto the stacks. • LEFT-l (L): add an arc with label l between the top two trees on the stack (s1 ←s0), and remove s1 from the stack. • RIGHT-l (R): add an arc with label l between the top two trees on the stack (s1 →s0), and remove s0 from the stack. 
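A minimal unlabeled sketch of this transition system is given below; the integer word indices and the printed trace for the example sentence are illustrative.

```python
# Illustrative arc-standard transitions (unlabeled for brevity).
# A state is (stack, buffer, arcs); arcs are stored as (head, modifier) pairs.

def shift(stack, buf, arcs):
    return stack + [buf[0]], buf[1:], arcs

def left_arc(stack, buf, arcs):           # s1 <- s0, remove s1
    s0, s1 = stack[-1], stack[-2]
    return stack[:-2] + [s0], buf, arcs + [(s0, s1)]

def right_arc(stack, buf, arcs):          # s1 -> s0, remove s0
    s0, s1 = stack[-1], stack[-2]
    return stack[:-1], buf, arcs + [(s1, s0)]

# Parsing "John loves Mary" with the gold sequence S, S, L, S, R:
state = ([], [1, 2, 3], [])               # 1=John, 2=loves, 3=Mary
for action in (shift, shift, left_arc, shift, right_arc):
    state = action(*state)
print(state[2])                           # [(2, 1), (2, 3)]: John <- loves -> Mary
```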
Given the sentence “John loves Mary”, the gold standard action sequence is S, S, L, S, R. 2.1 Model Chen and Manning (2014) proposed a deterministic neural dependency parser, which rely on dense embeddings to predict the optimal actions at each step. We propose a variation of Chen and Manning 1394 (2014), which splits the output layer into two hierarchical layers: the action layer and dependency label layer. The hierarchical parser determines a action in two steps, first deciding the action type, and then the dependency label (Figure 2). At each step of deterministic parsing, the neural model extracts n atomic features from the parsing state. We adopt the feature templates of Chen and Manning (2014). Every atomic feature is represented by a feature embedding ei ∈Rd, An input layer is used to concatenate the n feature embeddings into a vector x = [e1; e2 . . . en], where x ∈ Rd·n. Then x is mapped to a dh-dimensional hidden layer h by a mapping matrix W1 ∈Rdh×d·n and a cube activation function for feature combination: h = (W1x + b1)3 (1) Our method is different from Chen and Manning (2014) in the output layer. Given the hidden layer h, the action type output layer oact and the label output layer olabel(ai) of the action type ai are computed as oact = W2h (2) olabel(ai) = W i 3h , (3) Where W2 ∈Rda×dh is the mapping matrix from the hidden layer to the action layer, and da is the number of action types. W i 3 ∈Rdlabel×dh is the mapping matrix from the hidden layer to the corresponding label layer, dlabel is the number of dependency labels. The probability of a labeled action yi,j given its history Acts and input x is computed as: p(yi,j | x, Acts) = p(ai | x, Acts) × p(lj | x, Acts, ai) (4) where p(ai | x, Acts) = eoi act Pda k=1 eok act (5) p(lj | x, Acts, ai) = eoj label(ai) Pdlabel k=1 eok label(ai) , (6) Here ai is the ith action in the action layer, and lj is the jth label in the label layer for ai. In training, we use the cross-entropy loss to maximum the probability of training data A: L(θ) = − X yi,j∈A log p(yi,j | x, Acts) (7) Experiments show that our hierarchical neural parser is both faster and slightly accurate than the original neural parser. 3 Reranking Scorer We adopt the recursive convolutional neural network (RCNN) of Zhu et al. (2015) for scoring full trees. Given a dependency subtree rooted at h, ci (0 < i ≤L) is the ith child of h. The dependency arc (h, ci) is represented by: zi = tanh(W (h,ci)pi) , (8) where pi = wh ⊕xci ⊕d(h,ci) (9) Here pi ∈Rn is the concatenation of head word embedding wh, child phrase representation xci and the distance embeddings d(h,ci). W (h,ci) ∈ Rm×n is a linear composition matrix, which depends on the POS tags of h and ci. The subtree phrase representation xh are computed using a max-pooling function on rows, over the matrix of arc representations Zh. Zh = [z1, z2, . . . , zL] (10) xh j = max i Zh j,i, 0 < j < m (11) The subtree with the head h is scored by: score(h) = L X i=1 vh,cizi (12) Here, vh,ci is the score vector, which is a vector of parameters that need to be trained. The score of the whole dependency tree y is computed as: st(x, y, Θ) = X w∈y score(w), (13) where w is the node in tree y and Θ denotes the set of parameters in the network. 
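As a rough sketch of the RCNN scorer, recursing bottom-up over the tree: `tree`, `word_emb`, `dist_emb`, and the POS-pair-indexed `W`/`v` lookups are all assumed interfaces, and embedding dimensions are assumed to be compatible; this is not the authors' implementation.

```python
import numpy as np

# Sketch of the RCNN tree scorer: each arc (h, c_i) is encoded from the head
# embedding, the child's phrase vector and a distance embedding; the head's
# phrase vector is a row-wise max over its arc encodings, and the tree score
# sums the arc scores of every node.

def score_subtree(h, tree, word_emb, dist_emb, W, v):
    children = tree.children(h)
    if not children:
        return word_emb(h), 0.0                  # leaf: phrase vec = word vec
    zs, total = [], 0.0
    for c in children:
        x_c, s_c = score_subtree(c, tree, word_emb, dist_emb, W, v)
        p = np.concatenate([word_emb(h), x_c, dist_emb(h, c)])
        z = np.tanh(W(h, c) @ p)                 # arc representation z_i
        total += v(h, c) @ z + s_c               # arc score plus child subtree score
        zs.append(z)
    x_h = np.max(np.stack(zs), axis=0)           # max-pooling over arc encodings
    return x_h, total
```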
4 Search-based Dynamic Reranking for Dependency Parsing Using the hierarchical parser of Section 2 as the baseline parser, we propose a search-based dynamic reranking model, which integrates search and learning by searching the reranking candidates dynamically, instead of limiting the scope to a fixed k-best list. The efficiency of the reranking model is guaranteed by 3 properties of the baseline parser, namely revising efficiency, probability diversity and search efficiency. 1395 Revising Depth UAS LAS 0 92.28 91.15 1 95.76 94.42 2 97.79 96.63 3 98.77 97.55 4 99.39 98.15 5 99.74 98.47 Table 1: Oracle of the baseline parser after revising actions. Revising depth is the maximum number of revised actions for one sentence. Action Type Num Average Probability Gold Shift 39194 99.38% Right 19477 98.90% Left 19556 99.61% Incorrect Shift 968 84.96% Right 746 85.88% Left 338 85.03% Table 2: Average action probabilities. 4.1 Properties of the Baseline Parser To demonstrate the above three properties, we give some preliminary results for the baseline. To parse the 1,695 sentences in Section 22 of WSJ, our baseline parser needs to perform 78,227 shiftreduce actions. During the process, if we correct every encountered incorrectly determined action and let the baseline parser re-decode the sentence from the point, we need to revise 2,052 actions, averaging 1.2 actions per sentence. In other words, the baseline parser can parse the 1,695 sentences correctly with 2,052 action being revised. Note that the revise operation is required to change the action type (i.e. S, L). After revising the action type, the optimal dependency label will be chosen for parsing by the hierarchical baseline parser. We only modify the action type in the revising process. Thus the modified trees are always structurally different instead of only with different dependency labels compared to the original one, which guarantees structured diversity. Revising Efficiency It can be seen from Table 1 that revising one incorrect action results in 3.5% accuracy improvement. We obtain a 99.74% UAS after a maximum 5 depth revising. Although we only revise the action type, the LAS goes up with the UAS. The property of revising efficiency suggests that high quality tree candidates can be found with a small number of changes. Probability Diversity Actions with lower probabilities are more likely to be incorrect. We compute the average probabilities of gold and incorrect actions in parsing the section 22 of WSJ (Table 2), finding that most gold actions have very high probabilities. The average probabilities of the gold actions is much higher than that of the incorrectly predicted ones, indicating that revising actions with lower probabilities can lead to better trees. Search Efficiency The fast speed of the baseline parser allows the reranker to search a large number of tree candidates efficiently. With the graph stack trick (Goldberg et al., 2013), the reranker only needs to perform partial parsing to obtain new trees. This enables a fast reranker in theory. 4.2 Search Strategy Given an output sequence of actions by the baseline parser, we revise the action with the lowest probability margin, and start a new branch by taking a new action at this point. The probability margin of an action a is computed as: p(amax)−p(a), where amax is the action taken by the baseline, which has the highest model probability. 
a is taken instead of amax for this branch, and the baseline parser is executed deterministically until parsing finishes, thus yielding a new dependency tree. We require that the action type must change in the revision and the most probable dependency label among all for the revised action type will be used. Multiple strategies can be used to search for the revised reranking process. For example, one intuitive strategy is best-first, which modifies the action with the lowest probability margin among all sequences of actions constructed so far. Starting from the original output of the baseline parser, modifying the action with the lowest probability margin results in a new tree. According to the best-first strategy, the action with the lowest probability margin in the two outputs will be revised next to yield the third output. The search repeats until k candidates are obtained, which are used as candidates for reranking. The best-first strategy, however, does not consider the quality of the output, which is like a greedy process. A better candidate ( with higher F1 score) is more likely to take us to the gold tree. With the best-first strategy, we revise one tree at each time. If the selected tree is not the optimal one, the revised tree will be less likely the gold one. Revising a worse output is less likely to generate the gold parse tree compared with revising a relatively better output. Our preliminary experi1396 ments confirms this intuition. As a result, we take a beam search strategy, which uses a beam to hold b outputs to modify. For each tree in beam search, most f actions with the lowest probability margin are modified, leading to b × f new trees. Here, b is the beam size, f is the revising factor. From these trees, the b best are put to the beam for the next step. Search starts with the beam containing only the original base parse, and repeats for l steps, where l is called the revising depth. The best tree will be selected from all the trees constructed. The search process for example in Figure 1 is illustrated in Figure 3, in which b = 1, f = 3 and l = 2. At each iteration, the b best candidates can be decided by the baseline parser score alone, which is the product of the probability of each action. We call this the static search reranking. As mentioned in the introduction, the baseline model score might not be the optimal criteria to select candidates for reranking, since they may not reflect the best oracle or diversity. We introduce a dynamic search strategy instead, using the reranking model to calculate heuristic scores for guiding the search. 4.3 Search-Based Dynamic Reranking Doppa et al. (2013) propose that structuredprediction by learning guide search should maintain two different scoring functions, a heuristic function for guiding search and a cost function for obtaining the best output. Following Doppa et al. (2013), we use the RCNN in Section 3 to yield two different scores, namely a heuristic score st(x, y, Θh) to guide the search of revising, and a cost score st(x, y, Θc) to select the best tree output. Denote b(i) as the beam at i-th step of search, k-best candidates in the beam of i + 1 step is: b(i + 1) = arg K c∈c(i) (st(x, c, Θh) + sb(x, c)), (14) where c(i) denotes the set of newly constructed trees by revising trees in b(i), sb(x, c) is the baseline model score and arg K leaves the k best candidate trees to the next beam. 
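To make the search procedure concrete, the following is a minimal sketch of the beam-search revising loop under the scoring choice just described: the baseline score alone for static search, and the baseline score plus the heuristic reranking score for dynamic search. The helper callables passed in are assumptions of this sketch, not the authors' implementation: least_confident_actions returns the f actions with the lowest probability margin p(a_max) - p(a), and revise_and_redecode flips one action type and lets the baseline parser re-decode the sentence from that point.

```python
import heapq

def revise_search(sentence, base_tree, b, f, l,
                  least_confident_actions, revise_and_redecode,
                  baseline_score, heuristic_score=None):
    """Return all candidate trees produced by l rounds of revising.

    b = beam size, f = revising factor, l = revising depth. If
    heuristic_score is None this is static search (baseline score only);
    otherwise it is dynamic search guided by the reranking model.
    """
    def score(tree):
        s = baseline_score(sentence, tree)           # sum of log action probabilities
        if heuristic_score is not None:              # dynamic search: add heuristic term
            s += heuristic_score(sentence, tree)
        return s

    beam, candidates = [base_tree], [base_tree]
    for _ in range(l):
        new_trees = [revise_and_redecode(sentence, tree, act)
                     for tree in beam
                     for act in least_confident_actions(tree, f)]
        candidates.extend(new_trees)
        beam = heapq.nlargest(b, new_trees, key=score)   # keep the b best for the next step
    return candidates      # the final output is then chosen by the cost scorer
```

With b = 1, f = 3 and l = 2, this loop matches the search pattern illustrated in Figure 3.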
Finally, the output tree yi of reranking is selected from all searched trees C in the revising process yi = arg max c∈C (st(x, c, Θc) + sb(x, c)) (15) Interpolated Reranker In testing, we also adopt the popular mixture reranking strategy (Hayashi et al., 2013; Le and Mikolov, 2014), Algorithm 1: Training Algorithm for the Search-Based Dynamic Reranking. Input: Sentence x, Gold Trees y Output: Θh, Θc for iter ←1 to N do Dh = []; Dk = []; foreach (x, y) ∈(x, y) do bestHScoreT = null; bestCScoreT = null; bestUAST = null; initTree = BASELINEPARSE(x); b1 = [initTree]; b2 = []; for d ←1 to depth do foreach t ∈b1 do revisedActs = SEEK (t); revisedTrees = REVISE (t, revisedActs); bestK = SORT (revisedTrees, Θh ) b2.ADD (bestK); bestHScoreT = MAXSCORE (bestHScoreT, revisedTrees, Θh); bestCScoreT = MAXSCORE (bestCScoreT, revisedTrees, Θc); bestUAST = MAXUAS (bestUAST, revisedTrees, y) b1 = b2; b2 = []; Dh.ADD (x, bestUAST, bestTScoreT); Dc.ADD (x, y, bestCScoreT); UPDATE(Dh, Θh); UPDATE(Dc, Θc); which obtains better reranking performance by a linear combination of the reranking score and the baseline model score. yi = arg max y∈τ(xi)(β(st(xi, y, Θc) + st(x, y, Θh)) + (1 −β)sb(xi, y)) (16) Here yi is the final output tree for a sentence xi; τ(xi) returns all the trees candidates of the dynamic reranking; β ∈[0, 1] is a hyper-parameter. 4.4 Training As k-best neural rerankers (Socher et al., 2013; Zhu et al., 2015), we use the max-margin criterion to train our model in a stage-wise manner (Doppa et al., 2013). Given training data Dc = (xi, yi, ˆyi)N i=1, where xi is the sentence, ˆyi is the output tree with highest cost score and yi is the corresponding gold tree, the final training objective is to minimize the loss function J(Θc), plus a 1397 SS SS SS LL LL S S L SS LL S S R RR SS S S S L R ... beam(0) L Action Candidates 0.1 R 0.1 R 1.0 R 0.0 6.3 5.2 4.9 S S L SS LL beam(1) S S L S R beam(2) Tree Candidates Figure 3: The beam search revising process of the example in Figure 1 with b = 1, f = 3 and l = 2 l2-regularization: J(Θc) = 1 |Dc| X (xi,yi, ˆyi)∈Dc ri(Θc) + λ 2 ||Θc|| (17) ri(Θc) = max(0, st(xi, ˆyi, Θc) + ∆(yi, ˆyi) −st(xi, yi, Θc)) (18) Here, Θc is the model, st(xi, yi, Θc) is the cost reranking score for yi. ∆(yi, ˆyi) = X d∈ˆyi κ1{d /∈yi} (19) ∆(yi, ˆyi) is the structured margin loss between yi and ˆyi, measured by counting the number of incorrect dependency arcs in the tree (Goodman, 1998; Zhu et al., 2015). Given training data Dh = (xi, y′ i, ˆy′ i)N i=1 for the heuristic score model, the training objective is to minimize the loss between the tree with the best UAS y′ i and the tree with the best heuristic reranking score ˆy′ i. J(Θh) = 1 |Dh| X (xi,y′ i,ˆy′ i)∈Dh ri(Θh) + λ 2 ||Θh|| (20) ri(Θh) = max(0, st(xi, ˆy′ i, Θh)) −st(xi, y′ i, Θh) (21) The detailed training algorithm is given by Algorithm 1. AdaGrad (Duchi et al., 2011) updating with subgradient (Ratliff et al., 2007) and minibatch is adopted for optimization. 5 Experiments 5.1 Set-up Our experiments are performed using the English Penn Treebank (PTB; Marcus et al., (1993)). We follow the standard splits of PTB3, using sections 2-21 for training, section 22 for development and section 23 for final testing. Following prior work on reranking, we use Penn2Malt1 to convert constituent trees to dependency trees. Ten-fold POS jackknifing is used in the training of the baseline parser. We use the POS-tagger of Collins (2002) to assign POS automatically. 
Because our reranking model is a dynamic reranking model, which generates training instances during search, we train 10 baseline parsing models on the 10-fold jackknifing data, and load the baseline parser model dynamically for reranking training . We follow Chen and Manning (2014), using the set of pre-trained word embeddings with a dictionary size of 13,0002 from Collobert et al. (2011). The word embeddings were trained on the entire English Wikipedia, which contains about 631 million words. 5.2 Hyper-parameters There are two different networks in our system, namely a hierarchical feed-forward neural network for the baseline parsing and a recursive convolution network for dynamic reranking. The hyper-parameters of the hierarchical parser are set as described by Chen and Manning (2014), with the embedding size d = 50, the hidden layer size dh = 300, the regularization parameter λ = 10−8, the initial learning rate of Adagrad α = 0.01 and the batch size b = 100,000. We set the hyperparameters of the RCNN as follows: word embedding size dw rnn = 25, distance embedding size dd rnn = 25, initial learning rate of Adagrad αrnn = 0.1, regularization parameter λrnn = 10−4, margin loss discount κ = 0.1 and revising factor f = 8. 5.3 The Hierarchical Neural Parser Shown in Table 3, the proposed hierarchical base parser is 1.3 times faster, and obtains a slight accuracy improvement (Table 3) upon the parser of Chen and Manning (2014). The reason for the 1http://stp.lingfil.uu.se/ nivre/research/Penn2Malt.html 2http://ronan.collobert.com/senna/ 1398 Parser dev test Speed UAS LAS UAS LAS hiero 92.28 91.15 91.83 90.76 884.7 original 92.00 90.89 91.67 90.62 682.3 Table 3: Performance comparison between the hierarchical and original neural parsers. Speed: sentences per second. Beam Size 1 2 4 8 UAS 93.38 93.45 93.81 93.51 Oracle 96.95 97.29 97.80 97.81 K 22.57 37.16 65.8 118.7 Table 4: Accuracies of the revising reranker with different beam sizes on the development set. speed gain is that smaller output layer leads to less computation of mapping from the hidden layer to the output layer in neural networks (Morin and Bengio, 2005; Mnih and Hinton, 2009). 5.4 Development Tests For the beam search dynamic reranking model, the selection of beam size b and revising depth l affect the accuracy and efficiency of the reranker. We tune the values on the development set. Beam Size A proper beam size balances efficiency and accuracy in the search process. The reranking accuracies with different beam sizes are listed in Table 4. Here, the oracle is the best UAS among searched trees during reranking. K is the number of searched candidate trees in testing. The UAS and parsing oracle both go up with increasing the beam size. Reranking with beam size = 4 gives the best development performance. We set the final beam size as 4 in the next experiments. Revising Depth As shown in Table 5, with revising depth increasing from 1 to 3, the reranker obtains better parsing oracle. The depth of 3 gives the best UAS 93.81% on the development set. The parsing oracle stops improving with deeper revised search. This may because in the fourth search step, the high quality trees begin to fall out the beam, resulting in worse output candidates, which make the revising step yield less oracle gains. We set the search depth as 3 in the next experiments. Integrating Search and Learning Shown in Table 6, the dynamic and static rerankers both achieve significant accuracy improvements over the baseline parser. 
The dynamic reranker gives Revising Depth 1 2 3 4 UAS 93.22 93.50 93.81 93.53 Oracle 96.31 97.57 97.80 97.81 K 8.87 38.45 65.8 90.28 Table 5: Accuracies of the revised reranker with different revising depths on development set. Search Type UAS +UAS Oracle Dynamic 93.81 +1.53 97.80 Static 93.29 +1.01 97.61 Table 6: Comparing dynamic and the static search. much better improvement, although the oracle of dynamic reranker is only 0.2% higher than the static one. This demostrates the benefit of diversity. The candidates are always the same for static search, but the dynamic reranker searches more diverse tree candidates in different iterations of training. To further explore the impact of training diversity to dynamic reranking, we also compare the dynamic search reranker of training and testing with different revising depth. In Table 7, origin is the results by training and testing with the same depth d. Results of ts is obtained by training with d = 3, and testing with a smaller d. For example, a reranker with training d = 3 and testing d = 2 achieves better performance than with training d = 2 and testing d = 2. The testing oracle of the former reranker is lower than the later, yet the former learns more from the training instance, obtaining better parsing accuracies. This again indicates that training diversity is very important besides the oracle accuracy. Interpolated Reranker Finally, we mix the baseline model score and the reranking score by following Hayashi et al. (2013) and Zhu et al. (2015), and the mixture parameter β is optimized by searching with the step size of 0.005. With the mixture reranking trick, the dynamic reranker obtains an accuracy of 94.08% (Table 8), with an improvement of 0.28% on the development set. 5.5 Final Results Comparison with Dependency Rerankers In Table 9, we compare the search-based dynamic rerankers with a list of dependency rerankers. The reranking models of Hayashi et al. (2013) and Hayashi et al. (2011) are forest reranking models. Le and Zuidema (2014) and Zhu et al. (2015) are neural k-best reranking models. Our dynamic 1399 Depth 1 2 3 ordinary UAS 93.22 93.50 93.81 oracle 96.31 97.57 97.80 ts UAS 93.59 93.79 93.81 oracle 96.29 93.42 97.80 Table 7: Accuracies of the revised reranker with different revising depths on the development set. Type static dynamic w/o mixture 93.29 93.81 w/ mixture 93.53 94.08 Table 8: Effects of interpolated reranking. reranking model achieves the highest accuracy improvement over the baseline parser on both the development and test sets. We obtain the best performance on the development set. Zhu et al. (2015) achieved higher accuracy on the test set, but they adopted a better baseline parser than ours, which could not be used in our dynamic reranker because it is not fast enough and will make our reranker slow in practice. Comparing with Neural Dependency Parsers We also compare parsing accuracies and speeds with a number of neural network dependency parsers. Dyer et al. (2015) proposed a dependency parser with stack LSTM; Zhou et al. (2015) applied the beam search for structured dependency parsing. Both achieved significant accuracy improvements over the deterministic neural parser of Chen and Manning (2014). Our dynamic search reranker obtains a 93.61% UAS on the test set, which is higher than most of the neural parsers except Weiss et al. (2015), who employ a structured prediction model upon the neural greedy baseline, achieving very high parsing accuracy. 
5.6 Results on Stanford dependencies We also evaluate the proposed static and dynamic rerankers on Staford dependency treebank. The main results are consistent with CoNLL dependency treebank with the dynamic reranker achieving a 0.41% accuracy improvement upon the static reranker on test data. But the parsing accuracy on Stanford dependency is not the state-of-the-art. We speculate that there may be two reasons. First, the baseline parsing accuracy on Stanford dependencies is lower than CoNLL. Second, all the hyper-parameters are tuned on the CoNLL data. Reranker UAS dev test Hayashi et al. (2011) N/A 92.87 (+0.97) Hayashi et al. (2013) N/A 93.12 (+0.62) Le and Zuidema (2014) N/A 93.12 (+1.09) (Zhu et al., 2015) baseline 92.45 92.35 reranking 93.50 (+1.05) 93.83 (+1.48) This work (CoNLL) baseline 92.28 91.83 dynamic 94.08 (+1.80) 93.61 (+1.78) static 93.53 (+1.25) 93.17 (+1.34) Table 9: Comparison of dependency rerankers. 6 Related Work Neural Networks Reranking A line of work has been proposed to explore reranking using neural networks. Socher et al. (2013) first proposed a neural reranker using a recursive neural network for constituent parsing. Le and Zuidema (2014) extended the neural reranker to dependency parsing using a inside-outside recursive neural network (IORNN), which can process trees both bottom-up and top-down. Zhu et al. (2015) proposed a RCNN method, which solved the problem of modeling k-ary parsing tree in dependency parsing. The neural rerankers are capable of capturing global syntax features across the tree. In contrast, the most non-local neural parser with LSTM (Dyer et al., 2015) cannot exploit global features. Different to previous neural rerankers, our work in this paper contributes on integrating search and learning for reranking, instead of proposing a new neural model. Forest Reranking Forest reranking (Huang, 2008; Hayashi et al., 2013) offers a different way to extend the coverage of reranking candidates, with computing the reranking score in the trees forests by decomposing non-local features with cube-pruning (Huang and Chiang, 2005). In contrast, the neural reranking score encodes the whole dependency tree, which cannot be decomposed for forest reranking efficiently and accurately. HC-Search Doppa et al. (2013) proposed a structured prediction model with HC-Search strategy and imitation learning, which is closely related to our work in spirit. They used the complete space search (Doppa et al., 2012) for sequence labeling tasks, and the whole search process halts after a specific time bound. Different from them, we propose a dynamic parsing reranking model based on the action revising process, which is a multi-step process by revising the least confident 1400 Type System UAS Speed Neural Zhou et al. (2015) 93.28 14.3 Dyer et al. (2015)‡ 93.30 105 Weiss et al. (2015)† 93.99 N/A Weiss et al. (2015) semi † 94.26 N/A Pei et al. (2015) 93.29 N/A Chen et al. (2015) 92.60 2.7 Chen and Manning (2014) 92.00 1013 This work dynamic 93.61 16.1 Table 10: Comparison with neural parsers. Speed: sentences per second. †: results are reported on Stanford dependencies. ‡: results are run by ourself using their codes. System UAS dev test baseline 91.80 91.41 dynamic 93.44 (+1.64) 92.95 (+1.57) static 93.09 (+1.29) 92.57 (+1.16) Table 11: Dynamic reranking results on Stanford dependencies. actions from the base output and the search stops in a given revising depth. 
The dynamic reranking model concentrates on extending the training diversity and testing oracle for parsing reranking, which is built on the transition-based parsing framework. 7 Conclusion In this paper, we proposed a search-based dynamic reranking model using a hierarchical neural base parser and a recursive convolutional neural score model. The dynamic model is the first reranker integrating search and learning for dependency parsing. It achieves significant accuracy improvement (+1.78%) upon the baseline deterministic parser. With the dynamic search process, our reranker obtains a 0.44% accuracy improvement upon the static reranker. The code of this paper can be downloaded from http://github. com/zhouh/dynamic-reranker. Acknowledgments We would like to thank the anonymous reviewers for their insightful comments. Xin-Yu Dai is the corresponding author of this paper. This work was supported by the Natural Science Foundation of China (61472183, 6130158, 61472191), the 863 program via 2015AA015406 and Singapore Ministratry of Education Tier 2 Grant T2MOE201301. References Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Empirical Methods in Natural Language Processing (EMNLP). Xinchi Chen, Yaqian Zhou, Chenxi Zhu, Xipeng Qiu, and Xuanjing Huang. 2015. Transition-based dependency parsing using two heterogeneous gated recursive neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Michael Collins and Terry Koo. 2000. Discriminative reranking for natural language parsing. MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE-, pages 175–182. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. 2012. Output space search for structured prediction. In ICML. Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. 2013. Hc-search: Learning heuristics and cost functions for structured prediction. In AAAI, volume 2, page 4. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics. 1401 Yoav Goldberg, Kai Zhao, and Liang Huang. 2013. Efficient implementation of beam-search incremental parsers. In ACL (2), pages 628–633. Joshua Goodman. 1998. Parsing inside-out. 
arXiv preprint cmp-lg/9805007. Katsuhiko Hayashi, Taro Watanabe, Masayuki Asahara, and Yuji Matsumoto. 2011. Third-order variational reranking on packed-shared dependency forests. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1479–1488. Association for Computational Linguistics. Katsuhiko Hayashi, Shuhei Kondo, and Yuji Matsumoto. 2013. Efficient stacked dependency parsing by forest reranking. Transactions of the Association for Computational Linguistics, 1:139–150. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 53–64. Association for Computational Linguistics. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In ACL, pages 586– 594. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053. Phong Le and Willem Zuidema. 2014. The insideoutside recursive neural network model for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 729–739. Association for Computational Linguistics. Phong Le, Willem Zuidema, and Remko Scha, 2013. Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, chapter Learning from errors: Using vector-based compositional semantics for parse reranking, pages 11–19. Association for Computational Linguistics. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pages 1081–1088. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the international workshop on artificial intelligence and statistics, pages 246–252. Citeseer. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. Maltparser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95–135. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proc. of ACL. Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. 2007. (online) subgradient methods for structured prediction. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In In Proceedings of the ACL conference. Citeseer. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1213–1222, Beijing, China, July. Association for Computational Linguistics. Chenxi Zhu, Xipeng Qiu, Xinchi Chen, and Xuanjing Huang. 2015. 
A re-ranking model for dependency parser with recursive convolutional neural network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1403–1412, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-Lingual Sentiment Classification with Bilingual Document Representation Learning Xinjie Zhou, Xianjun Wan and Jianguo Xiao Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {zhouxinjie,wanxiaojun,xiaojianguo}@pku.edu.cn Abstract Cross-lingual sentiment classification aims to adapt the sentiment resource in a resource-rich language to a resource-poor language. In this study, we propose a representation learning approach which simultaneously learns vector representations for the texts in both the source and the target languages. Different from previous research which only gets bilingual word embedding, our Bilingual Document Representation Learning model BiDRL directly learns document representations. Both semantic and sentiment correlations are utilized to map the bilingual texts into the same embedding space. The experiments are based on the multilingual multi-domain Amazon review dataset. We use English as the source language and use Japanese, German and French as the target languages. The experimental results show that BiDRL outperforms the state-of-the-art methods for all the target languages. 1 Introduction Sentiment analysis for online user-generated contents has become a hot research topic during the last decades. Among all the sentiment analysis tasks, polarity classification is the most widely studied topic. It has been proved to be invaluable in many applications, such as opinion polling (Tang et al., 2012), customer feedback tracking (Gamon, 2004), election prediction (Tumasjan et al., 2010), stock market prediction (Bollen et al., 2011) and so on. Most of the current sentiment classification systems are built on supervised machine learning algorithms which require manually labelled data. However, sentiment resources are usually unbalanced in different languages. Cross-lingual sentiment classification aims to leverage the resources in a resource-rich language (such as English) to classify the sentiment polarity of texts in a resource-poor language (such as Japanese). The biggest challenge for cross-lingual sentiment classification is the vocabulary gap between the source language and the target language. This problem is addressed with different strategies in different approaches. Wan (2009) use machine translation tools to translate the training data directly into the target language. Meng et al. (2012) and Lu et al. (2011) exploit parallel unlabeled data to bridge the language barrier. Prettenhofer and Stein (2010) use correspondence learning algorithm to learn a map between the source language and the target language. Recently, representation learning methods has been proposed to solve the cross-lingual classification problem (Xiao and Guo, 2013; Zhou et al., 2015). These methods aim to learn common feature representations for different languages. However, most of the current researches only focus on bilingual word embedding. In addition, these models only use the semantic correlations between aligned words or sentences in different languages while the sentiment correlations are ignored. In this study, we propose a cross-lingual representation learning model BiDRL which simultaneously learns both the word and document representations in both languages. 
We propose a joint learning algorithm which exploits both monolingual and bilingual constraints. The monolingual constraints help to model words and documents in each individual language, while the bilingual constraints help to build a consistent embedding space across languages. For each individual language, we extend the paragraph vector model (Le and Mikolov, 2014) to obtain word and document embeddings. The traditional paragraph vector model is fully unsupervised and does not use the valuable sentiment labels. We extend it in a semi-supervised manner by forcing positive and negative documents to fall on different sides of a classification hyperplane. Learning task-specific embeddings has been proved to be effective in previous research. To address the cross-language problem, different strategies are proposed to obtain a consistent embedding space across languages. Both sentiment and semantic relatedness are exploited, while previous studies only use the semantic connection between parallel sentences or documents.

The performance of BiDRL is evaluated on a multilingual multi-domain Amazon review dataset (Prettenhofer and Stein, 2010). By selecting English as the source language, a total of nine tasks are evaluated with different combinations of three target languages and three domains. The proposed method achieves the state-of-the-art performance on all the tasks.

The main contributions of this study are summarized as follows: 1) We propose a novel representation learning method, BiDRL, which directly learns bilingual document representations for cross-lingual sentiment classification. Different from previous studies which only obtain word embeddings, our model can learn vector representations for both words and documents in bilingual texts. 2) Our model leverages both the semantic and sentiment correlations between bilingual documents. Not only the parallel documents but also the documents with the same sentiment are required to have similar representations. 3) Our model achieves state-of-the-art performance on nine benchmark cross-lingual sentiment classification tasks and consistently outperforms the existing methods by a large margin.
Wan (2009) translates both the training data (English to Chinese) and the test data (Chinese to English) to train different models in both the source and target languages. The co-training algorithm (Blum and Mitchell, 1998) is used to combine the bilingual models together and improve the performance. In addition to the translation-based methods, several studies utilize parallel corpus or existing resources to bridge the language barrier. Balamurali (2012) use WordNet senses as features for supervised sentiment classification. They use the linked WordNets of two languages to bridge the language gap. Lu et al. (2011) consider the multilingual scenario where small amount of labeled data is available in the target language. They attempted to jointly classify the sentiment for both source language and target language. Meng et al. (2012) propose a generative cross-lingual mixture model to leverage unlabeled bilingual parallel data. Prettenhofer and Stein (2010) use the structural correspondence learning algorithm to learn a map between the source language and the target language. Xiao and Guo (2014) treat the bilingual feature learning problem as a matrix completion task. This work is also related to bilingual representation learning. Zou et al. (2013) propose to use word alignment as the constraints in bilingual word embedding. Each word in one language should be similar to the aligned words in another language. Gouws et al. (2015) propose a similar algorithm but only use sentence-level alignment. It tries to minimize a sampled L2-loss between the bag-of-words sentence vectors of the parallel cor1404 pus. Xiao and Guo (2013) learn different representations for words in different languages. Part of the word vector is shared among different languages and the rest is language-dependent. Klementiev et al. (2012) treat the task as a multi-task learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. Hermann and Blunsom (2015) propose the bilingual CVM model which directly minimizes the representation of a pair of parallel documents. The document representation is calculated with a composition function based on words. Chandar A P et al. (2014) and Zhou et al. (2015) use the autoencoder to model the connections between bilingual sentences. It aims to minimize the reconstruction error between the bag-of-words representations of two parallel sentences. Luong et al. (2015) propose the bilingual skip-gram model which leverages the word alignment between parallel sentences. Pham et al. (2015) extend the paragraph vector model to force bilingual sentences to share the same sentence vector. This study differs with the existing works in the following three aspects, 1) we exploit both the semantic and sentiment correlations of the bilingual texts. Existing bilingual embedding algorithms only use the semantic connection between parallel sentences or documents. 2) Our algorithm learns both the word and document representations. Most of the previous studies simply compute the average of the word vectors in a document. 3) Sentiment labels are used in our embedding algorithm by introducing a classification hyperplane. It not only helps to achieve better embedding performance in each individual language but also helps to bridge the language barrier. 3 Framework Firstly we introduce several notations used in BiDRL. 
Let S and Su denote the documents from the training dataset and the documents from the unlabeled dataset in the source language respectively. For each document d ∈S, it has a sentiment label y ∈{1, −1}. We denote the sentiment label set of all the documents in S as Y . Let T and Tu denote the documents from the test dataset and the documents from the unlabeled dataset in the target language. The documents in the training and test datasets in the source and target languages are translated into the other language using the onFigure 1: Framework of BiDRL line machine translation service - Google Translate1. We denote them as Ts (the translation of S) and St (the translation of T). We wish to learn a D-dimensional vector representation for all the documents in the dataset. The general framework of BiDRL is shown in Figure 1. After we obtain the data in the source and target languages, we propose both the monolingual and bilingual constraints to learn the model. The monolingual constraints help to model words and documents in each individual language. The bilingual constraints help to build a consistent embedding space across different languages. The joint learning framework is semi-supervised which uses the sentiment labels Y of the training documents. 3.1 Monolingual Constraints In this subsection, we describe the representation learning algorithm for the source and target languages. We start from the paragraph vector model (Le and Mikolov, 2014) which has been proved to be one of the state-of-the-art methods for document modeling. In the paragraph vector framework, both documents and words are mapped to unique vectors. Each document is treated as a unique token which is the context of all the words in the document. Therefore, each word in the doc1http://translate.google.com/ 1405 Figure 2: Paragraph vector ument can be predicted by the context tokens and the document as shown in Figure 2. The idea leads to the following function, arg max X d∈D X (w,C)∈d log p(w | C, d) (1) where D is the document set, w and C are the word and its context in a document d. The only difference between paragraph vector algorithm and the well-known word2vec algorithm (Mikolov et al., 2013) is the additional document vector. The conditional probability of predicting a word from its context is modeled via softmax which is very expensive to compute. It is usually approximately solved via negative sampling. A logbilinear model is used instead to predict whether two words are in the same context. For a word and context pair (w, C) in a document d, the objective function becomes, L1 = −log σ(vT w · 1 k + 1 · (vd + X c∈C vc))− n X i=1 Ew′∈pn(w)(log σ(−v′T w · 1 k + 1 · (vd + X c∈C vc))) (2) where σ(·) is the sigmoid function, c is a context word in C, k is the window size, vw and vc are the vectors for words and context words, vd is the vector for the document d, w′ is the negative sample from the noise distribution Pn(w). Mikolov et al. (2013) set Pn(w) as the 3 4 power of the unigram distribution which outperforms the unigram and the uniform distribution significantly. The paragraph vector model captures the semantic relations between words and documents. In addition, we hope that the vector representation of a document can directly reflect its sentiment. The document vectors associated with different sentiments should fall into different positions in the embedding space. We introduce a logistic regression classifier into the embedding algorithm. 
For each labeled document d with the label y, we use it to classify the sentiment based on the cross entropy loss function, L2 = −y log σ((xT vd + b) −(1 −y) log σ(−xT vd −b)) (3) where x is the weight vector for the features and b is the bias, vd is the document vector. Combining the above two loss functions, we obtain the monolingual embedding algorithm. For the source language, we get the word and document representations via arg min Ls = X d∈S∪Su X (w,C)∈d L1 + X d∈S L2 (4) The embedding for the target language is obtained similarly, arg min Lt = X d∈T∪Tu X (w,C)∈d L1 + X d∈Ts L2 (5) where the labeled dataset Ts is translated from the source language. 3.2 Bilingual Constraints The key problem of bilingual representation learning is to obtain a consistent embedding space across the source and the target languages. We propose three strategies for BiDRL to bridge the language gap. The first strategy is to share the classification hyperplane in Equation 3. The logistics regression parameter x and b are the same in the source and the target languages. By sharing the same classification parameter, bilingual documents with the same sentiment will fall into similar areas in the embedding space. Therefore, introducing the logistic regression classifier in our embedding algorithm not only helps to obtain task-specific embedding but also helps to narrow the language barrier. The second strategy is to minimize the difference between a pair of parallel documents, i.e., the 1406 original documents and the translated documents. Such word or document based regularizer is widely used in previous works (Gouws et al., 2015; Zou et al., 2013). We simply measure the differences of two documents via the Euclidean Distance. The parallel documents refer to the documents in S and T and their corresponding translations (St and Ts). It leads to the following loss function, L3 = X (ds,dt)∈(S,Ts) ∥vds −vdt∥2 + X (ds,dt)∈(St,T) ∥vds −vdt∥2 (6) where (ds, dt) is a pair of parallel documents and v∗is their vector representations. Our third strategy aims to generate similar representations for texts with the same sentiment. The traditional paragraph vector model focuses on modeling the semantic relationship between words and documents while our method aims to preserve the sentiment relationship as well. For each document d ∈S in the source language, we find K least similar documents with the same sentiment. The document similarity is measured by cosine similarity using TF-IDF features. We hope that these K documents should have similar representation with document d despite of their textual difference. We denote the K documents as Qs and their parallel documents in the target language as Qt. It leads to the following loss function, L4 = X d∈S ( X ds∈Qs ∥vds −vd∥2 + X dt∈Qt ∥vdt −vd∥2) (7) where v∗denotes the vector representation of a document. Combining the monolingual embedding algorithm and the cross-lingual correlations, we have the overall objective function as follows, arg min L = Ls + Lt + L3 + L4 (8) which can be solved using stochastic gradient descent (SGD). After learning BiDRL, we represent each document in the training and test dataset by the concatenation of its vector representation in both the source and the target languages. In particular, for each training document d ∈S, we represent it as [vd vd′] where d′ is its corresponding translation and v∗is the learned document representation in BiDRL. Similarly, for each test document d ∈T, we represent it as [vd′ vd]. 
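As a concrete illustration, below is a minimal numpy sketch of the per-document loss terms of Equations 3, 6 and 7 together with the final feature concatenation. The variable names, the omission of the context-prediction term L1 from Equation 2, and the mapping of labels from {-1, 1} to {0, 1} inside the classifier loss are assumptions of this sketch, not details of the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidrl_losses(v_d, v_d_trans, y, x, b, same_sentiment_src, same_sentiment_tgt):
    """Loss terms for one labeled source document d (context term L1 omitted)."""
    # L2 (Eq. 3): sentiment classifier on the document vector; the hyperplane
    # (x, b) is shared by both languages (strategy 1).
    p = sigmoid(x @ v_d + b)
    y01 = (y + 1) / 2                       # assumed mapping of labels {-1, 1} -> {0, 1}
    l2 = -(y01 * np.log(p) + (1 - y01) * np.log(1 - p))

    # L3 (Eq. 6): a document and its translation should be close (strategy 2).
    l3 = np.sum((v_d - v_d_trans) ** 2)

    # L4 (Eq. 7): the K textually least similar documents with the same
    # sentiment, and their translations, should also be close (strategy 3).
    l4 = sum(np.sum((v_q - v_d) ** 2) for v_q in same_sentiment_src) \
       + sum(np.sum((v_q - v_d) ** 2) for v_q in same_sentiment_tgt)
    return l2, l3, l4

def concat_features(v_src, v_tgt):
    """After training, a document is represented by [v_d ; v_d'] across languages."""
    return np.concatenate([v_src, v_tgt])
```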
Afterwards, a logistic regression classifier is trained using the concatenated feature vectors of S. The polarity of the reviews in T can be predicted by applying the classifier on the concatenated feature vectors of T. 4 Experiments 4.1 Dataset We use the multilingual multi-domain Amazon review dataset2 created by (Prettenhofer and Stein, 2010). It contains three different domains book, DVD and music. Each domain has reviews in four different languages English, German, French and Japanese. In our experiments, we use English as the source language and the rest three as target languages. Therefore, we have a total of nine tasks with different combinations of three domains and three target languages. For each task, the training and test datasets have 1000 positive reviews and 1000 negative reviews. There are also several thousand of unlabeled reviews but the quantity of them varies significantly for different tasks. Following (Prettenhofer and Stein, 2010), when there are more than 50000 unlabeled reviews we randomly selected 50000 of them, otherwise we use all the unlabeled reviews. The detailed statistics of the dataset are shown in Table 1. We translated the 2000 training reviews and 2000 test reviews into the other languages using Google Translate. Prettenhofer and Stein (2010) has already provided the translation of the test data. We only need to translate the English training data into the three target languages. All the review texts are tokenized and converted into lowercase. We use Mecab3 to segment the Japanese reviews. 4.2 Implementation In the bilingual representation learning algorithm, we set the vector size as 200 and the context windows as 10. The learning rate is set to 0.025 fol2https://www.uni-weimar.de/medien/ webis/corpora/corpus-webis-cls-10/ 3http://taku910.github.io/mecab/ 1407 Target Language Domain MT-BOW MT-PV CL-SCL BSE CR-RL Bi-PV BiDRL German book 79.68 79.90 79.50 80.27 79.89 79.51 84.14 DVD 77.92 80.09 76.92 77.16 77.14 78.60 84.05 music 77.22 80.71 77.79 77.98 77.27 82.45 84.67 French book 80.76 80.14 78.49 78.25 84.25 84.39 DVD 78.83 81.49 78.80 74.83 79.60 83.60 music 75.78 81.92 77.92 78.71 80.09 82.52 Japanese book 70.22 67.45 73.09 70.75 71.11 71.75 73.15 DVD 71.30 68.86 71.07 74.96 73.12 75.40 76.78 music 72.02 74.53 75.11 77.06 74.38 75.45 78.77 Average Accuracy 75.97 77.23 76.52 74.26 76.08 78.57 81.34 Table 2: Cross-lingual sentiment classification accuracy for the nine tasks. For all the methods, we get ten different runs of the algorithm and calculate the mean accuracy. Target Domain |SU| |TU| Language German book 50000 50000 DVD 30000 50000 music 25000 50000 French book 50000 32000 DVD 30000 9000 music 25000 16000 Japanese book 50000 50000 DVD 30000 50000 music 25000 50000 Table 1: The amount of unlabeled reviews used in the experiments. There are also 1000 positive and 1000 negative reviews both for training and test in each task, i.e. |S| = |T| = 2000. lowing word2vec and it declines with the training procedure. K is empirically chosen as 10. The algorithm runs 10 iterations on the dataset. 4.3 Baseline We introduce several state-of-the-art methods used for comparison in our experiment as follows. MT-BOW: It learns a classifier in the source language using bag-of-words features and the test data is translated into the source language via Google Translate. We directly use the experimental results reported in (Prettenhofer and Stein, 2010). 
MT-PV: We translate the training data into the target language and also translate the test data into the source language. In both the source and target languages, we use the paragraph vector model to learn the vector representation of the documents. Therefore, each document can be represented by the concatenation of the vector in two languages. A logistic regression classifier is trained using the concatenated feature vectors similarly to BiDRL. MT-PV can be regarded as a simplified version of BiDRL without the L2, L3 and L4 regularizers. CL-SCL: It is the cross-lingual structural correspondence learning algorithm proposed by (Prettenhofer and Stein, 2010). It learns a map between the bag-of-words representations in the source and the target languages. It also leverages Google Translate to obtain the word translation oracle. BSE: It is the bilingual embedding method of (Tang et al., 2012). It aims to learn two different mapping matrices for the source and target languages. The two matrices map the bag-of-words representations in the source and the target languages into the same feature space. Tang et al. (2012) only report their results on 6 of the 9 tasks. CR-RL: It is the bilingual word representation learning method of (Xiao and Guo, 2013). It learns different representations for words in different languages. Part of the word vector is shared among different languages and the rest is languagedependent. The document representation is calculated by taking average over all words in the document. Bi-PV: Pham et al. (2015) extended the paragraph vector model into bilingual setting by sharing the document representation of a pair of parallel documents. Their method requires large amounts of parallel data and does not need the machine translation service during test phase. In our setting, there are not enough parallel data to train the model and it will lead to an unfair comparison without using the machine translated text. We implement a variant of their method which learns the 1408 vector representation for the training and test data using both the original and the translated texts. Each pair of parallel documents shares the same document representation. We also implement the method of (Zhou et al., 2015) which is originally designed for the English-Chinese cross-lingual sentiment classification task. We find that it is not very adaptable in our case because the negation pattern and sentiment words are hard to choose for our target languages. The results of our replication do not achieve comparable results with the rest methods and are not listed here to avoid misleading the readers. 4.4 Results and Analysis Table 2 shows the experimental results for all the baselines and our method. For all the nine tasks, our bilingual document representation learning method achieves the state-of-the-art performance. The two most simple approaches MT-BOW implemented by (Prettenhofer and Stein, 2010) and MTPV implemented by us are both strong baselines. They achieve comparable results with the more complex baselines on many tasks. MT-PV performs better than MT-BOW on most tasks which proves that the representation learning method is more useful than the traditional bag-of-words features. The three word-based representation learning methods CL-SCL, BSE and CR-RL achieve similar results with the simple model MT-BOW and only outperform it on some tasks. However, the document representation learning methods MTPV, Bi-PV and BiDRL performs much better. 
It shows that capturing the compositionality of words is important for sentiment classification. The isolated word representations are not enough to model the whole document. The Bi-PV model outperforms MT-PV on most tasks and shows that the authors idea of learning a single representation for a pair of parallel documents is more useful than learning them separately. For all the baselines and our method, the performance of the English-Japanese tasks is lower than that of the English-German and English-French tasks. It is reasonable because the English language is much closer to German and French than Japanese. The machine translation tool also performs better when translating between the Western languages. Our BiDRL model outperforms all the existing methods on all the tasks. The accuracy is over 80% on all the six tasks for the two European target languages. The mean accuracy of the nine tasks shows a significant gap between BiRDL and the existing models. It achieves an improvement about 3% compared to the previous state-of-theart methods. 4.5 Parameter Sensitivity Study In this subsection, we investigate the influence of the vector size of our representation learning algorithm. We conduct the experiments by changing the vector size from 50 to 400. For each parameter setting, we run the algorithm for ten times and get the mean accuracy. The results of MT-PV and BiDRL on all the nine tasks are shown in Figure 3. For almost all the tasks, we can observe that our model BiDRL steadily outperforms the strong baseline MT-PV. It proves the efficacy of our bilingual embedding constraints. For most of the nine tasks including DEDVD, DE-MUSIC, FR-DVD, FR-MUSIC and JPMUSIC, the performance of BiDRL increases with the growth of the vector size at the beginning and remains stable afterwards. For the rest tasks, our model responds less sensitively to the change of the vector size and the prediction accuracy keeps steady. However, the results of MT-PV show no regular patterns with the change of the vector size which makes it hard to choose a satisfying parameter value. The parameter K is empirically chosen as 10 because we find that its value has little influence to our model when it is chosen between 10 and 50. Selecting a small K will help to accelerate the training procedure. 4.6 Analysis of the Sentiment Information The traditional paragraph vector model only models the semantic relatedness between texts via the word co-occurrence statistics. In this study, we propose to learn the bilingual representation utilizing the sentiment information. Firstly, we introduce a classification hyperplane to separate the embedding of texts with different polarities, i.e the loss function L2. Secondly, we consider the texts with the same sentiment but has largely different textual expressions. They are forced the have similar representations, i.e. L4. Table 3 shows the 1409 Figure 3: Influence of vector size for the nine cross-lingual sentiment classification tasks results of our model without using these sentiment information. Model MT-PV BiDRL-L2 BiDRL-L4 BiDRL Accuracy 77.23 79.51 80.43 81.34 Table 3: Influence of the sentiment information. We only show the mean accuracy of the nine tasks due to space limit. MT-PV can be regarded as BiDRL without all the sentiment information. It achieves lower results than the other three methods. We can also observe that removing L2 or L4 both decreases the accuracy. It proves that the sentiment information helps BiDRL to achieve better results. 
5 Conclusion and Future Work In this study, we propose a bilingual document representation learning method for cross-lingual sentiment classification. Different from previous studies which only get bilingual word embeddings, we directly learn the vector representation for documents in different languages. We propose three strategies to achieve a consistent embedding space for the source and target languages. Both sentiment and semantic correlations are exploited in our algorithm while previous works only use the semantic relatedness between parallel documents. Our model is evaluated on a benchmarking dataset which contains three different target languages and three different domains. Several stateof-the-art methods including several bilingual representation learning models are used for comparison. Our algorithm outperforms all the baseline methods on all the nine tasks in the experiment. Our future work will focus on extending the bilingual document representation model into the multilingual scenario. We will try to learn a single embedding space for a source language and multiple target languages simultaneously. In addition, we will also explore the possibility of using more complex neural network models such as convolutional neural network and recurrent neural network to build bilingual document representation system. Acknowledgments The work was supported by National Natural Science Foundation of China (61331011), National Hi-Tech Research and Development Pro1410 gram (863 Program) of China (2015AA015403, 2014AA015102) and IBM Global Faculty Award Program. We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. References AR Balamurali. 2012. Cross-lingual sentiment analysis for indian languages using linked wordnets. In Proceedings of 2012 International Conference on Computational Linguistics (COLING), pages 73–82. Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proceedings of the 23rd international conference on computational linguistics, pages 28–36. Association for Computational Linguistics. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, pages 92–100. ACM. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems, pages 1853–1861. Michael Gamon. 2004. Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis. In Proceedings of the 20th international conference on Computational Linguistics, page 841. Association for Computational Linguistics. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proceedings of The 32nd International Conference on Machine Learning, pages 748–756. Karl Moritz Hermann and Phil Blunsom. 2015. Multilingual models for compositional distributed semantics. In Proceedings of 52rd Annual Meeting of the Association for Computational Linguistic, pages 58–68. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. 
2016
133
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1413–1423, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Segment-Level Sequence Modeling using Gated Recursive Semi-Markov Conditional Random Fields Jingwei Zhuo1,2 ∗, Yong Cao2, Jun Zhu1 †, Bo Zhang1, Zaiqing Nie2 1Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys., TNList Lab, Tsinghua University, Beijing, 100084, China 2Microsoft Research, Beijing, 100084, China {zjw15@mails, dcszj@mail, dcszb@mail}.tsinghua.edu.cn; {yongc, znie}@microsoft.com Abstract Most of the sequence tagging tasks in natural language processing require to recognize segments with certain syntactic role or semantic meaning in a sentence. They are usually tackled with Conditional Random Fields (CRFs), which do indirect word-level modeling over word-level features and thus cannot make full use of segment-level information. Semi-Markov Conditional Random Fields (Semi-CRFs) model segments directly but extracting segment-level features for Semi-CRFs is still a very challenging problem. This paper presents Gated Recursive Semi-CRFs (grSemi-CRFs), which model segments directly and automatically learn segmentlevel features through a gated recursive convolutional neural network. Our experiments on text chunking and named entity recognition (NER) demonstrate that grSemi-CRFs generally outperform other neural models. 1 Introduction Most of the sequence tagging tasks in natural language processing (NLP) are segment-level tasks, such as text chunking and named entity recognition (NER), which require to recognize segments (i.e., a set of continuous words) with certain syntactic role or semantic meaning in a sentence. These tasks are usually tackled with Conditional Random Fields (CRFs) (Lafferty et al., 2001), which do word-level modeling as putting each word a tag, by using some predefined tagging schemes, e.g., the “IOB” scheme (Ramshaw ∗This work was done when J.W.Z was on an internship with Microsoft Research. † J.Z is the corresponding author. and Marcus, 1995). Such tagging schemes are lossy transformations of original segment tags: They do indicate the boundary of adjacent segments but lose the length information of segments to some extent. Besides, CRFs can only employ word-level features, which are either hand-crafted or extracted with deep neural networks, such as window-based neural networks (Collobert et al., 2011) and bidirectional Long Short-Term Memory networks (BI-LSTMs) (Huang et al., 2015). Therefore, CRFs cannot make full use of segmentlevel information, such as inner properties of segments, which cannot be fully encoded in wordlevel features. Semi-Markov Conditional Random Fields (Semi-CRFs) (Sarawagi and Cohen, 2004) are proposed to model segments directly and thus readily utilize segment-level features that encode useful segment information. Existing work has shown that Semi-CRFs outperform CRFs on segment-level tagging tasks such as sequence segmentation (Andrew, 2006), NER (Sarawagi and Cohen, 2004; Okanohara et al., 2006), web data extraction (Zhu et al., 2007) and opinion extraction (Yang and Cardie, 2012). However, Semi-CRFs need many more features compared to CRFs as they need to model segments with different lengths. As manually designing the features is tedious and often incomplete, how to automatically extract good features becomes a very important problem for Semi-CRFs. 
A naive solution that builds multiple feature extractors, each of which extracts features for segments with a specific length, is apparently time-consuming. Moreover, some of these separate extractors may underfit as the segments with specific length may be very rare in the training data. By far, SemiCRFs are lacking of an automatic segment-level feature extractor. In this paper, we fill the research void by 1413 proposing Gated Recursive Semi-Markov Conditional Random Fields (grSemi-CRFs), which can automatically learn features for segment-level sequence tagging tasks. Unlike previous approaches which usually use a neural-based feature extractor with a CRF layer, a grSemi-CRF consists of a gated recursive convolutional neural network (grConv) (Cho et al., 2014) with a Semi-CRF layer. The grConv is a variant of recursive neural networks. It builds a pyramid-like structure to extract segment-level features in a hierarchical way. This feature hierarchy well matches the intuition that long segments are combinations of their short sub-segments. This idea was first explored in Cho et al. (2014) to build an encoder in neural machine translation and then extended to solve other problems, such as sentence-level classification (Zhao et al., 2015) and Chinese word segmentation (Chen et al., 2015). The advantages of grSemi-CRFs are two folds. First, thanks to the pyramid architecture of grConvs, grSemi-CRFs can extract all the segmentlevel features using one single feature extractor, and there is no underfitting problem as all parameters of the feature extractor are shared globally. Besides, unlike recurrent neural network (RNN) models, the training and inference of grSemiCRFs are very fast as there is no time dependency and all the computations can be done in parallel. Second, thanks to the semi-Markov structure of Semi-CRFs, grSemi-CRFs can model segments in sentences directly without the need to introduce extra tagging schemes, which solves the problem that segment length information cannot be fully encoded in tags. Besides, grSemi-CRFs can also utilize segment-level features which can flexibly encode segment-level information such as inner properties of segments, compared to word-level features as used in CRFs. By combining grConvs with Semi-CRFs, we propose a new way to automatically extract segment-level features for SemiCRFs. Our major contributions can be summarized as: (1) We propose grSemi-CRFs, which solve both the automatic feature extraction problem for Semi-CRFs and the indirect word-level modeling problem in CRFs. As a result, grSemiCRFs can do segment-level modeling directly and make full use of segment-level features; (2) We evaluate grSemi-CRFs on two segmentlevel sequence tagging tasks, text chunking and NER. Experimental results show the effectiveness of our model. 2 Preliminary In sequence tagging tasks, given a word sequence, the goal is to assign each word (e.g., in POS Tagging) or each segment (e.g., in text chunking and NER) a tag. By leveraging a tagging scheme like “IOB”, all the tasks can be regarded as wordlevel tagging. More formally, let X denote the set of words and Y denote the set of tags. A word sentence with length T can be denoted by x = (x1, ..., xT ) and its corresponding tags can be denoted as y = (y1, ..., yT ). A CRF (Lafferty et al., 2001) defines a conditional distribution p(y|x) = 1 Z(x) exp T X t=1 F(yt, x) + A(yt−1, yt) ! 
, (1) where F(yt, x) is the tag score (or potential) for tag yt at position t, A(yt−1, yt) is the transition score between yt−1 and yt to measure the tag dependencies of adjacent words and Z(x) = P y′ exp PT t=1 F(y′ t, x) + A(yt−1, y′ t)  is the normalization factor. For the common log-linear models, F(yt, x) can be computed by F(yt, x) = vT ytf(yt, x) + byt, (2) where V = (v1, ..., v|Y|)T ∈R|Y|×D, bV = (b1, ..., b|Y|)T ∈R|Y|, f(yt, x) ∈RD are the word-level features for yt over the sentence x and D is the number of features. f(yt, x) can be manually designed or automatically extracted using neural networks, such as window-based neural networks (Collobert et al., 2011). If we consider segment-level tagging directly1, we get a segmentation of the previous tag sentence. With a little abuse of notation, we denote a segmentation by s = (s1, ..., s|s|) in which the jth segment sj = ⟨hj, dj, yj⟩consists of a start position hj, a length dj < L where L is a predefined upperbound and a tag yj. Conceptually, sj means a tag yj is given to words (xhj, ..., xhj+dj−1). A Semi-CRF (Sarawagi and Cohen, 2004) defines a conditional distribution p(s|x) = 1 Z(x) exp   |s| X j=1 F(sj, x) + A(yj−1, yj)  , (3) 1Word-level tagging can be regarded as segment-level tagging over length-1 segments. 1414 Figure 1: An overview of grSemi-CRFs. For simplicity, we set the segment length upperbound L = 4, and the sentence length T = 6. The left side is the feature extractor, in which each node denotes a vector of segment-level features (e.g., z(d) k for the kth node in the dth layer). Embeddings of word-level input features are used as length-1 segment-level features, and the length-d feature is extracted from two adjacent length-(d −1) features. The right side is the Semi-CRF. Tag score vectors are computed as linear transformations of segment-level features and the number of them equals the number of nodes in the same layer. For clarity, we use triangle, square, pentagon and hexagon to denote the tag score vectors for length-1, 2, 3, 4 segments and directed links to denote the tag transformations of adjacent segments. where F(sj, x) is the potential or tag score for segment sj, A(yj−1, yj) is the transition score to measure tag dependencies of adjacent segments and Z(x) = P s′ exp P|s′| j=1 F(s′ j, x) + Ay′ j−1,y′ j  is the normalization factor. For the common loglinear models, F(sj, x) can be computed by F(sj, x) = vT yjf(sj, x) + byj, (4) where V = (v1, ..., v|Y|)T ∈R|Y|×D, bV = (b1, ..., b|Y|)T ∈R|Y| and f(sj, x) ∈RD are the segment-level features for sj over the sentence x. As Eq. (1) and Eq. (3) show, CRFs can be regarded as a special case of Semi-CRFs when L = 1. CRFs need features for only length-1 segments (i.e., words), while Semi-CRFs need features for length-ℓsegments (1 ≤ℓ≤L). Therefore, to model the same sentence, Semi-CRFs generally need many more features than CRFs, especially when L is large. Besides, unlike word-level features used in CRFs, the sources of segment-level features are often quite limited. In existing work, the sources of f(sj, x) can be roughly divided into two parts: (1) Concatenations of word-level features (Sarawagi and Cohen, 2004; Okanohara et al., 2006); and (2) Hand-crafted segment-level features, including task-insensitive features, like the length of segments, and task-specific features, like the verb phrase patterns in opinion extraction (Yang and Cardie, 2012). 
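Because Eq. (3) sums over all segmentations s of the sentence, computing the normalizer Z(x) requires a dynamic program over segment lengths up to L. The following Python sketch is a generic Semi-CRF forward pass that follows the factorization in Eq. (3); it is an illustrative reconstruction, not the implementation released with any of the cited papers. Here seg_score and trans_score stand in for F(s_j, x) and A(y_{j-1}, y_j), and the toy scores at the end are placeholders rather than learned values.

import math

def logsumexp(vals):
    m = max(vals)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(v - m) for v in vals))

def semicrf_log_partition(T, labels, L, seg_score, trans_score):
    # seg_score(start, length, y) -> F(s_j, x); trans_score(y_prev, y) -> A(y_prev, y).
    # alpha[t][y] = log-sum of scores of all segmentations of the first t words
    #               whose final segment carries tag y.
    NEG_INF = float("-inf")
    START = "<s>"
    alpha = [{y: NEG_INF for y in labels} for _ in range(T + 1)]
    alpha[0] = {START: 0.0}
    for t in range(1, T + 1):
        for y in labels:
            cands = []
            for d in range(1, min(L, t) + 1):          # last segment spans words t-d .. t-1
                for y_prev, prev in alpha[t - d].items():
                    if prev == NEG_INF:
                        continue
                    a = 0.0 if y_prev == START else trans_score(y_prev, y)
                    cands.append(prev + seg_score(t - d, d, y) + a)
            alpha[t][y] = logsumexp(cands) if cands else NEG_INF
    return logsumexp(list(alpha[T].values()))

# toy usage with uniform (zero) scores and a hypothetical tag set:
labels = ["CAUSE", "O"]
print(semicrf_log_partition(T=3, labels=labels, L=2,
                            seg_score=lambda s, d, y: 0.0,
                            trans_score=lambda yp, y: 0.0))
# = log of the number of labeled segmentations (here log 16)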
As manually designing features is time-consuming and often fails to capture the rich statistics underlying the data, how to automatically extract features for Semi-CRFs remains a challenge.

3 Gated Recursive Semi-Markov CRFs

In this section, we present Gated Recursive Semi-CRFs (grSemi-CRFs), which inherit the advantages of Semi-CRFs in segment-level modeling and also solve the feature extraction problem of Semi-CRFs by introducing a gated recursive convolutional neural network (grConv) as the feature extractor. Instead of building multiple feature extractors at different scales of segment lengths, grSemi-CRFs can extract features for segments of any length using a single grConv, and learn the parameters effectively via sharing statistics.

3.1 Architecture

The architecture of grSemi-CRFs is illustrated in Figure 1. A grSemi-CRF can be divided into two parts, a feature extractor (i.e., a grConv) and a Semi-CRF. Below, we explain each part in turn. As is shown, the feature extractor is a pyramid-like directed acyclic graph (DAG), in which nodes are stacked layer by layer and information is propagated from adjacent nodes in the same layer to their co-descendants in the higher layer through directed links.

Figure 2: The building block of the feature extractor (i.e., grConv), in which parameters are shared among the pyramid structure. We omit the dependency of θL, θR, θM on GL, GR.

Recall that L denotes the upper bound on segment length (i.e., the height of the grConv); we regard the bottom level as the 1st level and the top level as the Lth level. Then, for a length-T sentence, the dth level has T − d + 1 nodes, which correspond to features for the T − d + 1 length-d segments. The kth node in the dth layer corresponds to the segment-level latent features z^{(d)}_k ∈ R^D, which denote the meaning of the segment, e.g., its syntactic role (for text chunking) or semantic meaning (for NER). Like CRFs, grSemi-CRFs allow word-level categorical inputs (i.e., x_k), which are transformed into continuous vectors (i.e., embeddings) according to look-up tables and then used as length-1 segment-level features (i.e., z^{(1)}_k). To be clear, we call these inputs input features and the extracted segment-level features (i.e., z^{(d)}_k) segment-level latent features. Besides, grSemi-CRFs also allow segment-level input features (e.g., gazetteers) directly, as shown in Eq. (12). We discuss more details in Section 4.3.

The building block of the feature extractor is shown in Figure 2, where an intermediate node \hat{z}^{(d)}_k ∈ R^D is introduced to represent the interaction of two length-(d − 1) segments. To capture such complex interactions, \hat{z}^{(d)}_k is computed through a non-linear transformation, i.e.,

\hat{z}^{(d)}_k = g(\alpha^{(d)}_k) = g(W_L z^{(d-1)}_k + W_R z^{(d-1)}_{k+1} + b_W),   (5)

where W_L, W_R ∈ R^{D×D} and b_W ∈ R^D are shared globally, and g(·) is a non-linear activation function; we use a modified version of the sigmoid function, g(x) = 4(\frac{1}{1+e^{-x}} - \frac{1}{2}). Then, the length-d segment-level latent features z^{(d)}_k can be computed as

z^{(d)}_k = \theta_L z^{(d-1)}_k + \theta_R z^{(d-1)}_{k+1} + \theta_M \hat{z}^{(d)}_k,   (6)

where θL, θM and θR ∈ R are gating coefficients that satisfy θL, θR, θM ≥ 0 and θL + θR + θM = 1.
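A minimal NumPy sketch of the building block in Figure 2, using the scalar gating of Eq. (6): the interaction term comes from Eq. (5) with the modified sigmoid given above, and the gating coefficients are passed in directly rather than computed from GL, GR (that step, Eq. (8), appears below). The randomly initialized weights are for illustration only; this is not the authors' released code.

import numpy as np

def g(x):
    # modified sigmoid: g(x) = 4 * (1 / (1 + exp(-x)) - 1/2)
    return 4.0 * (1.0 / (1.0 + np.exp(-x)) - 0.5)

def grconv_block(z_left, z_right, W_L, W_R, b_W, theta):
    # Combine two adjacent length-(d-1) features into one length-d feature.
    # theta = (theta_L, theta_R, theta_M), non-negative scalars summing to one.
    z_hat = g(W_L @ z_left + W_R @ z_right + b_W)                    # Eq. (5)
    theta_L, theta_R, theta_M = theta
    return theta_L * z_left + theta_R * z_right + theta_M * z_hat    # Eq. (6)

# toy usage
D = 4
rng = np.random.default_rng(0)
W_L, W_R, b_W = rng.normal(size=(D, D)), rng.normal(size=(D, D)), np.zeros(D)
z1, z2 = rng.normal(size=D), rng.normal(size=D)
print(grconv_block(z1, z2, W_L, W_R, b_W, theta=(0.3, 0.3, 0.4)))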
Here, we make a little modification of grConvs by making the gating coefficients as vectors instead of scalars, i.e., z(d) k = θL ◦z(d−1) k + θR ◦z(d−1) k+1 + θM ◦ˆz(d) k , (7) where ◦denotes the element-wise product and θL, θR and θM ∈ RD are vectorial gating coefficients3 which satisfy the condition that θL,i, θR,i, θM,i ≥0 and θL,i + θR,i + θM,i = 1 for 1 ≤i ≤D. There are two reasons for this modification: (1) Theoretically, the element-wise combination makes a detailed modeling as each feature in z(d) k may have its own combining; and (2) Experimentally, this setting makes our grSemi-CRF4 more flexible, which increases its generalizability and leads to better performance in experiments as shown in Table 4. We can regard Eq. (7) as a soft gate function to control the propagation flows. Besides, all the parameters (i.e., WL, WR, bW, GL, GR, bG) are shared globally and recursively applied to the input sentence in a bottom-up manner. All of these account for the name gated recursive convolutional neural networks (grConvs). Eq. (5) and Eq. (7) build the information propagation criteria in a grConv. The basic assumption behind Eq. (5) and Eq. (7) is that the meaning of one segment can be represented as a linear combination of three parts: (1) the meaning of its prefix segment, (2) the meaning of its suffix segment and (3) the joint meaning of both (i.e., the complex interaction). This process matches our intuition about the hierarchical structure in the composition of a sentence. For example, the meaning of the United States depends on the suffix segment United States, whose meaning is not only from its prefix United or suffix States, but the interaction of both. The vectorial gating coefficients θL, θR and θM 3We omit the dependency of θL, θR and θM on d and k for notation simplicity. 4Unless otherwise stated, we regard “grSemi-CRF” as grSemi-CRF with vectorial gating coefficients in default. 1416 are computed adaptively, i.e.,    θL θR θM   =    1/Z 1/Z 1/Z   ◦exp  GLz(d−1) k + GRz(d−1) k+1 + bG  , (8) where GL, GR ∈R3D×D and bG ∈R3D are shared globally. Z ∈Rd is normalization coefficients and the ith element of Z is computed via Zi = 3 X j=1 h exp  GLz(d−1) k + GRz(d−1) k+1 + bG i D×(j−1)+i . (9) After the forward propagation of the feature extractor is over, the tag scores (i.e., the potential functions for Semi-CRFs) are computed through a linear transformation. For segment sj = ⟨hj, dj, yj⟩, its latent feature is f(sj, x) = z(dj) hj and corresponding potential/tag score is F(sj; x) = f(⟨hj, dj, yj⟩; x) = h V (dj) 0 z (dj) hj + b (dj) V i yj , (10) where V(dj) 0 ∈R|Y|×D and b(dj) ∈R|Y| are parameters for length-dj segments. To encode contextual information, we can assume that the tag of a segment depends not only on itself but also its neighbouring segments with the same length, i.e., F(sj; x) = " H X i=−H V (dj) i z (dj) hj+i + b (dj) V # yj , (11) where V(dj) −H , ..., V(dj) 0 , ..., V(dj) H ∈R|Y|×D and H is the window width for neighbouring segments. Apart from the automatically extracted segmentlevel latent features z(d) k , grSemi-CRFs also allow segment-level input features (e.g., gazetteers), i.e., F(sj; x) = " H X i=−H V (dj) i z (dj) hj+i + b (dj) V + U(dj)c (dj) hj # yj , (12) where U(dj) ∈R|Y|×D′ and c(dj) hj ∈RD′ is a vector of segment-level input features. Then, we can use Eq. (3) for inference by using a Semi-CRF version of Viterbi algorithms (Sarawagi and Cohen, 2004). 
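Continuing the sketch above, the following NumPy code assembles the full pyramid with the vectorial gates of Eqs. (7)-(9) and the per-length tag scores of Eq. (10); the context window of Eq. (11) and the segment-level input features of Eq. (12) are omitted for brevity. Parameter shapes follow the paper (GL, GR ∈ R^{3D×D}, bG ∈ R^{3D}), but the code is an illustrative reconstruction rather than the authors' implementation.

import numpy as np

def vector_gates(z_left, z_right, G_L, G_R, b_G):
    # Eqs. (8)-(9): a three-way softmax per feature dimension, yielding vectorial
    # coefficients theta_L, theta_R, theta_M in R^D that are non-negative and
    # sum to one element-wise. Returned shape: (3, D).
    D = z_left.shape[0]
    scores = (G_L @ z_left + G_R @ z_right + b_G).reshape(3, D)
    exp = np.exp(scores - scores.max(axis=0))        # numerically stabilized
    return exp / exp.sum(axis=0)

def grconv_pyramid(Z1, params, L):
    # Z1: (T, D) embeddings used as the length-1 features z^{(1)}_k.
    # Returns {d: (T-d+1, D)} latent features for all segments up to length L.
    W_L, W_R, b_W, G_L, G_R, b_G = params
    g = lambda x: 4.0 * (1.0 / (1.0 + np.exp(-x)) - 0.5)
    feats = {1: Z1}
    for d in range(2, L + 1):
        prev = feats[d - 1]
        if prev.shape[0] < 2:
            break                                    # sentence shorter than d
        rows = []
        for k in range(prev.shape[0] - 1):
            zl, zr = prev[k], prev[k + 1]
            th = vector_gates(zl, zr, G_L, G_R, b_G)
            z_hat = g(W_L @ zl + W_R @ zr + b_W)
            rows.append(th[0] * zl + th[1] * zr + th[2] * z_hat)   # Eq. (7)
        feats[d] = np.stack(rows)
    return feats

def tag_scores(feats, V, b_V):
    # Eq. (10): per-length linear projections onto the |Y| tags.
    # V[d]: (|Y|, D) matrix V^{(d)}_0, b_V[d]: (|Y|,) bias b^{(d)}_V.
    return {d: Z @ V[d].T + b_V[d] for d, Z in feats.items()}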
3.2 Learning of Parameters To learn grSemi-CRFs, we maximize the log likelihood L = log p(s|x) over all the parameters. Here, for notation simplity, we consider the simpliest case, i.e., using Eq. (10) to compute tag scores. More details can be found in the supplementary note. Gradients of Semi-CRF-based parameters (i.e., A and V0) and tag scores F(sj, x) can be computed based on the marginal probability of neighbouring segments via a Semi-CRF version of forward-backward algorithms (Sarawagi and Cohen, 2004). As for the grConv-based parameters, we can compute their gradients by back propagation. For example, gradients for WL and GL are5 ∂L ∂WL = L X d=1 T −d+1 X k=1 ∂L ∂z(d) k ∂z(d) k ∂WL , ∂L ∂GL = L X d=1 T −d+1 X k=1 ∂L ∂z(d) k ∂z(d) k ∂GL , (13) where ∂z(d) k ∂WL and ∂z(d) k ∂GL can be derived from Eq. (5), Eq. (7) and Eq. (8). For ∂L ∂z(d) k , thanks to the recursive structure, it can be computed as ∂L ∂z(d) k =∂z(d+1) k ∂z(d) k ∂L ∂z(d+1) k + ∂z(d+1) k−1 ∂z(d) k ∂L ∂z(d+1) k−1 + V(d) 0 T ∂L ∂F(s(d) k , x) , (14) where s(d) k = ⟨k, d, Y⟩is a length-|Y| vector which denotes segments with all possible tags for z(d) k , ∂L ∂F(s(d) k ,x) is the gradient for F(s(d) k , x) and ∂z(d+1) k ∂z(d) k = diag(θL) + diag(θM ◦g′(α(d+1) k ))WL, (15) where diag(θL) denotes the diagonal matrix spanned by vector θL, and ∂z(d+1) k−1 ∂z(d) k has a similar form. As Eq. (14) shows, for each node in the feature extractor of grSemi-CRFs, its gradient consists of two parts: (1) the gradients back propagated from high layer nodes (i.e., longer segments); and (2) the supervising signals from SemiCRFs. In other words, the supervision in the objective function is added to each node in grSemiCRFs. This is a nice property compared to other neural-based feature extractors used in CRFs, in which only the nodes of several layers on the top receive supervision. Besides, the term diag(θL) in Eq. (15) prevents ∂z(d+1) k ∂z(d) k from being too small when g′(α(d) k ) and WL are small, which acts as the linear unit recurrent connection in the memory block of LSTM (Hochreiter and Schmidhuber, 1997; Zhao et al., 2015). All of these help in avoiding gradient vanishing problems in training grSemi-CRFs. 5Gradients for WR, bW, GR, bG can be computed in similar ways. 1417 4 Experiments We evaluate grSemi-CRFs on two segment-level sequence tagging NLP tasks: text chunking and named entity recognition (NER). 4.1 Datasets For text chunking, we use the CONLL 2000 text chunking shared dataset6 (Tjong Kim Sang and Buchholz, 2000), in which the objective is to divide the whole sentence into different segments according to their syntactic roles, such as noun phrases (“NP”), verb phrases (“VP”) and adjective phrases (“ADJP”). We call it a “segment-rich” tasks as the number of phrases are much higher than that of non-phrases which is tagged with others (“O”). We evaluate performance over all the chunks instead of only noun pharse (NP) chunks. For NER, we use the CONLL 2003 named entity recognition shared dataset7 (Tjong Kim Sang and De Meulder, 2003), in which segments are tagged with one of four entity types: person (“PER”), location (“LOC”), organization (“ORG”) and miscellaneous(“MISC”), or others (“O”) which is used to denote non-entities. We call it a “segment-sparse” task as entities are rare while non-entities are common. 
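Both CoNLL datasets distribute their annotations as per-word tags in an IOB-style scheme, while a Semi-CRF is trained on whole segments. A small conversion utility of the following form could bridge the two representations; the helper names and the (start, length, label) encoding are ours, chosen to mirror the s_j = ⟨h_j, d_j, y_j⟩ notation of Section 2, and the code assumes the IOB2 variant in which every segment begins with a B- tag.

def segments_to_iob(segments, length):
    # segments: list of (start, seg_len, label) triples covering positions 0..length-1
    tags = ["O"] * length
    for start, seg_len, label in segments:
        if label == "O":
            continue
        tags[start] = "B-" + label
        for i in range(start + 1, start + seg_len):
            tags[i] = "I-" + label
    return tags

def iob_to_segments(tags):
    # Recover (start, seg_len, label) triples from a well-formed IOB2 tag sequence.
    segments, i = [], 0
    while i < len(tags):
        if tags[i].startswith("B-"):
            label, start = tags[i][2:], i
            i += 1
            while i < len(tags) and tags[i] == "I-" + label:
                i += 1
            segments.append((start, i - start, label))
        else:
            segments.append((i, 1, "O"))
            i += 1
    return segments

print(segments_to_iob([(0, 2, "NP"), (2, 1, "VP")], 4))
# ['B-NP', 'I-NP', 'B-VP', 'O']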
4.2 Input Features For each word, we use multiple input features, including the word itself, its length-3 prefix and length-4 suffix, its capitalization pattern, its POS tag, the length-4,8,12,20 prefixs of its Brown clusters (Brown et al., 1992) and gazetteers8. All of them are used as word-level input features except gazetteers, which are used as segment-level features directly. All the embeddings for word-level inputs are randomly initialized except word embeddings, which can be initialized randomly or by pretraining over unlabeled data, which is external information compared to the dataset. Besides word embeddings, Brown clusters and gazetteers are also based on external information, as summarized below: • Word embeddings. We use Senna embeddings9 (Collobert et al., 2011), which are 50dimensional and have been commonly used 6Available at: http://www.cnts.ua.ac.be/conll2000/chunking/ 7Available at: http://www.cnts.ua.ac.be/conll2003/ner/ 8Among them, POS tags are provided in the dataset. 9Available at http://ronan.collobert.com/senna/ in sequence tagging tasks (Collobert et al., 2011; Turian et al., 2010; Huang et al., 2015); • Brown clusters. We train two types of Brown clusters using the implementation from Liang (2005): (1) We follow the setups of Ratinov and Roth (2009), Turian et al. (2010) and Collobert et al. (2011) to generate 1000 Brown clusters on Reuters RCV1 dataset (Lewis et al., 2004); (2) We generate 1000 Brown clusters on New York Times (NYT) corpus (Sandhaus, 2008); • Gazetteers. We build our gazetteers based on the gazetteers used in Senna (Collobert et al., 2011) and Wikipedia entries, mainly the locations and organizations. We also denoise our gazetteers by removing overlapped entities and using BBN Pronoun Coreference and Entity Type Corpus (Weischedel and Brunstein, 2005) as filters10. 4.3 Implementation Details To learn grSemi-CRFs, we employ Adagrad (Duchi et al., 2011), an adaptive stochastic gradient descent method which has been proved successful in similar tasks (Chen et al., 2015; Zhao et al., 2015). To avoid overfitting, we use the dropout strategy (Srivastava et al., 2014) and apply it on the first layer (i.e., z(0) k ). We also use the strategy of ensemble classifiers, which is proved an effective way to improve generalization performance (Collobert et al., 2011). All results are obtained by decoding over an average Semi-CRF after 10 training runs with randomly initialized parameters. For the CONLL 2003 dataset, we use the F1 scores on the development set to help choose the best-performed model in each run. For the CONLL 2000 dataset, as there is no development set provided, we use cross validation as Turian et al. (2010) to choose hyperparameters. After that, we retrain model according to the hyperparameters and choose the final model in each run. Our hyperparameter settings for these two tasks are shown in Table 1. The segment length is set according to the maximum segment length in training set. We set the minibatch size to 10, which means that we process 10 sentences in a batch. The window width defines the parameter H in Eq. (12) when producing tag score vectors. 10We apply gazetteers on BBN corpus, collect lists of false positive entities and clean our gazetteers according to these lists. 1418 Hyperparameters CONLL 2000 CONLL 2003 Segment length 15 10 Dropout 0.3 0.3 Learning rate 0.3 0.3 Epochs 15 20 Minibatches 10 10 Window width 2 2 Table 1: Hyperparameter settings for our model. 
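The word-level input features of Section 4.2 can be read off as simple string templates. The sketch below shows one plausible encoding of the word form, length-3 prefix, length-4 suffix, capitalization pattern, POS tag and length-4/8/12/20 Brown-cluster prefixes; the function names are ours, and obtaining the Brown bit-string (e.g., from Liang's implementation) and the gazetteer matches is outside the sketch.

def capitalization_pattern(word):
    if word.isupper():
        return "ALL_CAPS"
    if word[:1].isupper():
        return "INIT_CAP"
    if any(c.isupper() for c in word):
        return "MIXED_CAP"
    return "LOWER"

def word_input_features(word, pos_tag, brown_path):
    # brown_path: the bit-string path of the word's Brown cluster, e.g. "0110101...";
    # its prefixes of length 4, 8, 12 and 20 give clusterings at several granularities.
    return {
        "word": word.lower(),
        "prefix3": word[:3].lower(),
        "suffix4": word[-4:].lower(),
        "cap": capitalization_pattern(word),
        "pos": pos_tag,
        "brown4": brown_path[:4],
        "brown8": brown_path[:8],
        "brown12": brown_path[:12],
        "brown20": brown_path[:20],
    }

print(word_input_features("Washington", "NNP", "01101011101010101010"))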
4.4 Results and Analysis Table 2 shows the results of our grSemi-CRFs and other models11. We divide other models into two categories, i.e., neural models and non-neural models, according to whether neural networks are used as automatic feature extractors. For neural models, Senna (Collobert et al., 2011) consists of a window-based neural network for feature extraction and a CRF for word-level modeling while BILSTM-CRF (Huang et al., 2015) uses a bidirectional Long Short-Term Memory network for feature extraction and a CRF for word-level modeling. For non-neural models, JESS-CM (Suzuki and Isozaki, 2008) is a semi-supervised model which combines Hidden Markov Models (HMMs) with CRFs and uses 1 billion unlabelled words in training. Lin and Wu (2009) cluster 20 million phrases over corpus with around 700 billion tokens, and use the resulting clusters as features in CRFs. Passos et al. (2014) propose a novel word embedding method which incorporates gazetteers as supervising signals in pretraining and builds a loglinear CRF over them. Ratinov and Roth (2009) use CRFs based on many non-local features and 30 gazetteers extracted from Wikipedia and other websites with more than 1.5 million entities. As Table 2 shows, grSemi-CRFs outperform other neural models, in both text chunking and named entity recognition (NER) tasks. BI-LSTMCRFs use many more input features than ours, which accounts for the phenomenon that the performance of our grSemi-CRFs is rather mediocre (i.e., 93.92% versus 94.13% and 84.66% versus 84.26%) without external information. However, once using Senna embeddings, our grSemi-CRFs perform much better than BI-LSTM-CRFs. For non-neural models, one similarity of them is that they use a lot of hand-crafted features, and many of them are even task-specific. Unlike them, 11Because of the space limit, we only compare our model with other models which follow similar settings and achieve high performance. Input Features CONLL 2000 CONLL 2003 None 93.92 84.66 Brown(NYT) 94.18 86.57 Brown(RCV1) 94.05 88.22 Emb 94.73 88.12 Gaz – 87.94 Emb + Brown(NYT) 95.01 88.86 Emb + Brown(RCV1) 94.87 89.44 Emb + Gaz – 89.88 Brown(NYT) + Gaz – 88.69 Brown(RCV1) + Gaz – 89.82 All(NYT) – 90.00 All(RCV1) – 90.87 Table 3: Results of grSemi-CRF with external information, measured in F1 score. None = no external information, Emb = Senna embeddings, Brown = Brown clusters, Gaz = gazetteers and All = Emb + Brown + Gaz. NYT and RCV1 in the parenthesis denote the corpus used to generate Brown clusters. “–” means no results. Notice that gazetteers are only applied to NER. grSemi-CRFs use much fewer input features and most of them are task-insensitive13. However, grSemi-CRFs achieve almost the same performance, sometimes even better. For text chunking, grSemi-CRF outperforms all reported supervised models, except JESS-CM (Suzuki and Isozaki, 2008), a semi-supervised model using giga-word scale unlabeled data in training14. However, the performance of our grSemi-CRF (95.01%) is very close to that of JESS-CM (95.15%). For NER, the performance of grSemi-CRFs are also very closed to state-of-the-art results (90.87% versus 90.90%). 4.4.1 Impact of External Information As Table 3 shows, external information improve the performance of grSemi-CRFs for both tasks. Compared to text chunking, we can find out that external information plays an extremely important role in NER, which coincides with the general idea that NER is a knowledge-intensive task (Ratinov and Roth, 2009). 
Another interesting thing is that, Brown clusters generated from NYT corpus performs better on the CONLL 2000 task while those generated from Reuters RCV1 dataset performs better on the CONLL 2003 task. The reason is 13E.g.: for NER, JESS-CM uses 79 different features; Lin and Wu (2009) use 48 baseline and phrase cluster features; while we only use 11. Besides, grSemi-CRFs use almost the same features for chunking and NER (except gazetteers). 14Being semi-supervised, JESS-CM can learn from interactions between labelled and unlabelled data during training but the training is slow compared to supervised models. 1419 Models CONLL 2000 CONLL 2003 Ours grSemi-CRF (Random embeddings) 93.92 84.66 grSemi-CRF (Senna embeddings) 95.01 89.44 (90.87) Neural Models Senna (Random embeddings) 90.33 81.47 Senna (Senna embeddings) 94.32 88.67 (89.59) BI-LSTM-CRF (Random) 94.13 84.26 BI-LSTM-CRF (Senna embeddings) 94.46 88.83 (90.10) Non-Neural Models JESS-CM (Suzuki and Isozaki, 2008), 15M 94.67 89.36 JESS-CM (Suzuki and Isozaki, 2008), 1B 95.15 89.92 Ratinov and Roth (2009)12 – 90.57 Lin and Wu (2009) – 90.90 Passos et al. (2014) – 90.90 Table 2: Experimental results over the CONLL-2000 and CONLL-2003 shared datasets, measured in F1 score. Numbers in parentheses are the F1 score when using gazetteers. JESS-CM (Suzuki and Isozaki, 2008) is a semi-supervised model, in which 15M or 1B denotes the number of unlabeled words it uses for training. Gating Coefficients CONLL 2000 CONLL 2003 Scalars 94.47 89.27(90.54) Vectors 95.01 89.44(90.87) Table 4: F1 scores of grSemi-CRF with scalar or vectorial gating coefficients. Numbers in parentheses are the F1 score when using gazetteers. that the CONLL 2000 dataset is the subset of Wall Street Journal (WSJ) part of the Penn Treebank II Corpus (Marcus et al., 1993) while the CONLL 2003 dataset is a subset of Reuters RCV1 dataset. Maybe the writing styles between NYT and WSJ are more similar than those between RCV1 and WSJ. 4.4.2 Impact of Vectorial Gating Coefficients As Table 4 shows, a grSemi-CRF using vectorial gating coefficients (i.e., Eq. (7)) performs better than that using scalar gating coefficients (i.e., Eq. (6)), which provides evidences for the theoretical intuition that vectorial gating coefficients can make a detailed modeling of the combinations of segment-level latent features and thus performs better than scalar gating coefficients. 4.4.3 Visualization of Learnt Segment-Level Features To demonstrate the quality of learnt segmentlevel features, we use an indirect way as widely adopted in previous work, e.g., Collobert et al. (2011). More specifically, we show 10 nearest neighbours for some selected queried segments according to Euclidean metric of corresponding features15. To fully demonstrate the power of grSemi-CRF in learning segment-level features, we use the Emb+Brown(RCV1) model in Table 3, which uses no gazetteers. We train the model on the CONLL 2003 training set and find nearest neighbours in the CONLL 2003 test set. We make no restrictions on segments, i.e., all possible segments with different lengths in the CONLL 2003 test set are candidates. As Table 5 shown, most of the nearest segments are meaningful and semantically related. For example, the nearest segments for “Filippo Inzaghi” are not only tagged with person, but also names of famous football players as “Filippo Inzaghi”. There also exist some imperfect results. 
E.g., for “Central African Republic”, nearest segments, which contain the same queried segment, are semantically related but not syntactically similar. The major reason may be that the CONLL 2003 dataset is a small corpus (if compared to the vast unlabelled data used to train Senna embeddings), which restricts the range for candidate segments and the quality of learnt segment-level features. Another reason is that labels in the CONLL 2003 dataset mainly encodes semantic information (e.g., named entities) instead of syntactic information (e.g., chunks). Besides, as we make no restriction on the formulation of candidate segments, sometimes only a part of the whole phrase will be retrieved, e.g., “FC Hansa”, which is the prefix of “FC Hansa Rostock”. Exploring better way of utilizing unla15Using the cosine similarity generates similar results. However, as Collobert et al. (2011) use Euclidean metric, we follow their settings. 1420 Queried Filippo Inzaghi AC Milan Central African Republic Asian Cup Segments Pierluigi Casiraghi FC Hansa From Central African Republic Scottish Cup Fabrizio Ravanelli SC Freiburg Southeast Asian Nations European Cup Bogdan Stelea FC Cologne In Central African Republic African Cup Nearest Francesco Totti Aston Villa The Central African Republic World Cup Neighbour Predrag Mijatovic Red Cross South African Breweries UEFA Cup Results Fausto Pizzi Yasuto Honda Of Southeast Asian Nations Europoean Cup Pierre Laigle NAC Breda New South Wales Asian Games Pavel Nedved La Plagne Central African Republic . Europa Cup Anghel Iordanescu Sporting Gijon Papua New Guinea National League Zeljko Petrovic NEC Nijmegen Central Africa F.A. Cup Table 5: Visualization of segment-level features learnt on the CONLL 2003 dataset. For each column the queried segment is followed by its 10 nearest neighbors (measured by the cosine similarity of their feature vectors). Corresponding tags for these four queried segments are (from left to right): person, organization, location and miscellaneous. belled data to improve learning segment-level features is part of the future work. 5 Discussions and Related Work Cho et al. (2014) first propose grConvs to learn fix-length representations of the whole source sentence in neural machine translation. Zhao et al. (2015) use grConvs to learn hierarchical representations (i.e., multiple fix-length representations) of the whole sentence for sentence-level classification problem. Both of them focus on sentencelevel classification problems while grSemi-CRFs are solving segment-level classification (sequence tagging) problems, which is fine-grained. Chen et al. (2015) propose Gated Recursive Neural Networks (GRNNs), a variant of grConvs, to solve Chinese word segmentation problem. GRNNs still do word-level modeling by using CRFs while grSemi-CRFs do segment-level modeling directly by using semi-CRFs and makes full use of the recursive structure of grConvs. We believe that, the recursive neural network (e.g., grConv) is a natural feature extractor for Semi-CRFs, as it extracts features for every possible segments by one propagation over one trained model, which is fast-computing and efficient. In this sense, grSemi-CRFs provide a promising direction to explore. 6 Conclusions In this paper, we propose Gated Recursive SemiMarkov Conditional Random Fields (grSemiCRFs) for segment-level sequence tagging tasks. 
Unlike word-level models such as CRFs, grSemiCRFs model segments directly without the need of using extra tagging schemes and also readily utilize segment-level features, both hand-crafted and automatically extracted by a grConv. Experimental evaluations demonstrate the effectiveness of grSemi-CRFs on both text chunking and NER tasks. In future work, we are interested in exploring better ways of utilizing vast unlabelled data to improve grSemi-CRFs, e.g., to learn phrase embeddings from unlabelled data or designing a semisupervised version of grSemi-CRFs. Acknowledgments J.W.Z, J.Z and B.Z are supported by the National Basic Research Program (973 Program) of China (No. 2013CB329403), National NSF of China (Nos. 61322308, 61332007), the Youngth Topnotch Talent Support Program, Tsinghua TNList Lab Big Data Initiative, and Tsinghua Initiative Scientific Research Program (No. 20141080934). References Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmentation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 465–472. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015. Gated recursive neural network for chinese word segmentation. In Proceedings of Annual Meeting of the Association for Computational Linguistics. 1421 Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, page 103. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397. Percy Liang. 2005. Semi-Supervised Learning for Natural Language. Ph.D. thesis, Massachusetts Institute of Technology. Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 1030–1038. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Daisuke Okanohara, Yusuke Miyao, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2006. Improving the scalability of semi-markov conditional random fields for named entity recognition. 
In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 465–472. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 78–86. Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third Workshop on Very Large Corpora. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147– 155. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6. Sunita Sarawagi and William W Cohen. 2004. Semimarkov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, pages 1185–1192. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using gigaword scale unlabeled data. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 665–673. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task: Chunking. In Proceedings of the Forth Conference on Computational Natural Language Learning, pages 127–132. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Computational Natural Language Learning, pages 142–147. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 384–394. Ralph Weischedel and Ada Brunstein. 2005. Bbn pronoun coreference and entity type corpus. Linguistic Data Consortium, Philadelphia, 112. Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1335–1345. Association for Computational Linguistics. Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 4069–4076. 1422 Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, and Hsiao-Wuen Hon. 2007. Webpage understanding: an integrated approach. In Proceedings of SIGKDD Conference on Knowledge Discovery and Data Mining. 1423
2016
134
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1424–1433, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Identifying Causal Relations Using Parallel Wikipedia Articles Christopher Hidey Department of Computer Science Columbia University New York, NY 10027 [email protected] Kathleen McKeown Department of Computer Science Columbia University New York, NY 10027 [email protected] Abstract The automatic detection of causal relationships in text is important for natural language understanding. This task has proven to be difficult, however, due to the need for world knowledge and inference. We focus on a sub-task of this problem where an open class set of linguistic markers can provide clues towards understanding causality. Unlike the explicit markers, a closed class, these markers vary significantly in their linguistic forms. We leverage parallel Wikipedia corpora to identify new markers that are variations on known causal phrases, creating a training set via distant supervision. We also train a causal classifier using features from the open class markers and semantic features providing contextual information. The results show that our features provide an 11.05 point absolute increase over the baseline on the task of identifying causality in text. 1 Introduction The automatic detection of causal relationships in text is an important but difficult problem. The identification of causality is useful for the understanding and description of events. Causal inference may also aid upstream applications such as question answering and text summarization. Knowledge of causal relationships can improve performance in question answering for “why” questions. Summarization of event descriptions can be improved by selecting causally motivated sentences. However, causality is frequently expressed implicitly, which requires world knowledge and inference. Even when causality is explicit, there is a wide variety in how it is expressed. Causality is one type of relation in the Penn Discourse Tree Bank (PDTB) (Prasad et al, 2008). In general, discourse relations indicate how two text spans are logically connected. In PDTB theory, these discourse relations can be marked explicitly or conveyed implicitly. In the PDTB, there are 102 known explicit discourse markers such as “and”, “but”, “after”, “in contrast”, or “in addition”. Of these, 28 explicitly mark causal relations (e.g., “because”, “as a result”, “consequently”). In addition to explicit markers, PDTB researchers recognize the existence of an open class of markers, which they call AltLex. There is a tremendous amount of variation in how AltLexes are expressed and so the set of AltLexes is arguably infinite in size. In the PDTB, non-causal AltLexes include “That compares with” and “In any event.” Causal AltLexes include “This may help explain why” and “This activity produced.” Discourse relations with explicit discourse markers can be identified with high precision (Pitler and Nenkova, 2009) but they are also relatively rare. Implicit relations are much more common but very difficult to identify. AltLexes fall in the middle; their linguistic variety makes them difficult to identify but their presence improves the identification of causality. One issue with causality identification is the lack of data. 
Unsupervised identification on open domain data yields low precision (Do et al, 2011) and while supervised methods on the PDTB have improved (Ji and Eisenstein, 2015), creating enough labeled data is difficult. Here, we present a distant supervision method for causality identification that uses parallel data to identify new causal connectives given a seed set. We train a classifier on this data and self-train to obtain new data. Our novel approach uses AltLexes that were automatically identified using semi-supervised learning over a parallel corpus. Since we do not know 1424 a priori what these phrases are, we used a monolingual parallel corpus to identify new phrases that are aligned with known causal connectives. As large corpora of this type are rare, we used Simple and English Wikipedia to create one. Section 2 discusses prior research in causality and discourse. Section 4 describes how we created a new corpus from Wikipedia for causality and extracted a subset of relations with AltLexes. In section 5, we recount the semantic and marker features and how they were incorporated into a classifier for causality. We show that these features improve causal inference by an 11.05 point increase in F-measure over a naive baseline in 6. Finally, we discuss the results and future work in 7. 2 Related Work Recent work on causality involved a combination of supervised discourse classification with unsupervised metrics such as PMI (Do et al, 2011). They used a minimally supervised approach using integer linear programming to infer causality. Other work focused on specific causal constructions events paired by verb/verb and verb/noun (Riaz and Girju, 2013) (Riaz and Girju, 2014). Their work considered semantic properties of nouns and verbs as well as text-only features. There has also been significant research into discourse semantics over the past few years. One theory of discourse structure is represented in the PDTB (Prasad et al, 2008). The PDTB represents discourse relationships as connectives between two arguments. Early work with the PDTB (Pitler and Nenkova, 2009) showed that discourse classes with explicit discourse connectives can be identified with high accuracy using a combination of the connective and syntactic features. Further work (Pitler et al, 2009) resulted in the identification of implicit discourse relations using word pair features; this approach extended earlier work using word pairs to identify rhetorical relations (Marcu, 2001) (Blair-Goldensohn et al, 2007). These word pairs were created from text by taking the cross product of words from the Gigaword corpus for explicit causal and contrast relations. Others built on this work by aggregating word pairs for every explicit discourse connective (Biran and McKeown, 2013). They then used the cosine similarity between a prospective relation and these word pairs as a feature. Recently, the first end-to-end discourse parser was completed (Lin et al, 2012). This parser jointly infers both argument spans and relations. The current state-of-the-art discourse relation classifier is a constituent parse recursive neural network with coreference (Ji and Eisenstein, 2015). Our work is similar to previous work to identify discourse connectives using unsupervised methods (Laali, 2014). In their research, they used the EuroParl parallel corpus to find discourse connectives in French using known English connectives and filtering connectives using patterns. 
Unlike this effort, we created our own parallel corpus and we determined new English connectives. Compared to previous work on causality, we focus specifically on causality and the AltLex. The work by Do and Riaz used minimally supervised (Do et al, 2011) or unsupervised (Riaz and Girju, 2013) approaches and a slightly different definition of causality, similar to co-ocurrence. The work of Riaz and Girju (2013) is most similar to our own. We also examine causality as expressed by the author of the text. However, they focus on intra-sentence constructions between noun or verb phrases directly whereas we attempt to examine how the AltLex connectives express causality in context. Lastly, Riaz and Girju used FrameNet and WordNet to identify training instances for causal verb-verb and verb-noun pairs (Riaz and Girju, 2014) whereas we use them as features for an annotated training set. Overall our contributions are a new dataset created using a distant supervision approach and new features for causality identification. One major advantage is that our method requires very little prior knowledge about the data and requires only a small seed set of known connectives. 3 Linguistic Background One disadvantage of the PDTB is that the marked AltLexes are limited only to discourse relations across sentences. We know that there are additional phrases that indicate causality within sentences but these phrases are neither found in the set of Explicit connectives nor AltLexes. Thus we expand our definition of AltLex to include these markers when they occur within a sentence. Although some phrases or words could be identified by consulting a thesaurus or the Penn Paraphrase Database (Ganitkevitch et al, 2013), we still need the context of the phrase to identify causality. We hypothesize that there is significant linguis1425 tic variety in causal AltLexes. In the set of known explicit connectives there are adjectives (“subsequent”), adverbs (“consequently”), and prepositions and prepositional phrases (“as a result”). We consider that these parts of speech and syntactic classes can be found in AltLexes as well. In addition, verbs and nouns often indicate causality but are not considered explicit connectives. Some obvious cases of AltLexes are the verbal forms of connectives such as “cause” and “result”. In addition to these verbs, there exist other verbs that can occur in causal contexts but are ambiguous. Consider that “make” and “force” can replace “cause” in this context: The explosion made people evacuate the building. The explosion forced people to evacuate the building. The explosion caused people to evacuate the building. However, the words can not be substituted in the following sentence: The baker made a cake. *The baker caused a cake. *The baker forced a cake. Furthermore, verbs such as “given” may replace additional causal markers: It’s not surprising he is tired since he did not get any sleep. It’s not surprising he is tired given that he did not get any sleep. There are also some phrases with the same structure as partial prepositional phrases like “as a result” or “as a result of”, where the pattern is preposition and noun phrase followed by an optional preposition. 
Some examples of these phrases include “on the basis of,” “with the goal of,” and “with the idea of.” We may also see phrases that are only causal when ending in a preposition such as “thanks to” or “owing to.” “Lead” may only be causal as a part of “lead to” and the same for “develop” versus “develop from.” In addition, prepositions can affect the direction of the causality. Comparing “resulting in” versus “resulting from”, the preposition determines that the latter is of the “reason” class and the former is of the “result” class. Ultimately, we want to be able to detect these phrases automatically and determine whether they are a large/small and open/closed class of markers. 4 Data In order to discover new causal connectives, we can leverage existing information about known causal connectives. It should be the case that if a phrase is a causal AltLex, it will occur in some context as a replacement for at least one known explicit connective. Thus, given a large dataset, we would expect to find some pairs of sentences where the words are very similar except for the connective. This approach requires a parallel corpus to identify new AltLexes. As large English paraphrase corpora are rare, we draw from previous work identifying paraphrase pairs in Wikipedia (Hwang et al, 2015). The dataset we used was created from the English and Simple Wikipedias from September 11, 2015. We used the software WikiExtractor to convert the XML into plain text. All articles with the same title were paired and any extra articles were ignored. Each article was lemmatized, parsed (both constituent and dependency), and named-entity tagged using the Stanford CoreNLP suite (Manning et al, 2014). We wish to identify paraphrase pairs where one element is in English Wikipedia and one is in Simple Wikipedia. Furthermore, we do not limit these elements to be single sentences because an AltLex can occur within a sentence or across sentences. Previous work (Hwang et al, 2015) created a score for similarity (WikNet) between English Wikipedia and Simple Wikipedia. Many similarity scores are of the following form comparing sentences W and W ′: s(W, W ′) = 1 Z X w∈W max w′∈W ′ σ(w, w′)idf(w) (1) where σ(w, w′) is a score1 between 2 words and Z is a normalizer ensuring the score is between 0 and 1. For their work, they created a score where σ(w, w′) = σwk(w, w′) + σwk(h, h′)σr(r, r′). σwk is a distance function derived from Wiktionary by creating a graph based on words appearing in a definition. h and h′ are the governors of w and w′ in a dependency parse and r and r′ are the relation. Similar sentences should have similar structure and the governors of two words in different sentences should also be similar. σr is 0.5 if h and h′ have the same relation and 0 otherwise. For this work, we also include partial matches, as we only need the connective and the immediate 1The score is not a metric, as it is not symmetric. 1426 Method Max F1 WikNet 0.4850 WikNet, λ = 0.75 0.5981 Doc2Vec 0.6226 Combined 0.6263 Table 1: Paraphrase Results surrounding context on both sides. If one sentence contains an additional clause, it does not affect whether it contains a connective. Thus, one disadvantage to this score is that when determining whether a sentence is a partial match to a longer sentence or a shorter sentence, the longer sentence will often be higher as there is no penalty for unmatched words between the two elements. We experimented with penalizing content words that do not match any element in the other sentence. 
The modified score, where W and W' are restricted to nouns, verbs, adjectives, or adverbs, is then:

s(W, W') = \frac{1}{Z} \sum_{w \in W} \max_{w' \in W'} \sigma(w, w') \, idf(w) - \lambda \left( |W' - W| + |W - W'| \right)   (2)

We also compared results with a model trained using doc2vec (Le and Mikolov, 2014) on each sentence and sentence pair, identifying paraphrases with their cosine similarity. As these methods are unsupervised, only a small amount of annotated data is needed to tune the similarity thresholds. Two graduate computer science students annotated a total of 45 Simple/English article pairs. There are 3,891 total sentences in the English articles and 794 total sentences in the Simple Wikipedia articles. Inter-annotator agreement (IAA) was 0.9626, computed on five of the article pairs using Cohen's Kappa. We tune the threshold for each possible score: the cosine similarity for doc2vec and the scoring function for WikNet. We also tune the lambda penalty for WikNet. F1 scores were calculated via grid search over these parameters (Table 1), and the best setting is a combined score using doc2vec and penalized WikNet with λ = 0.75, where a pair is considered to be a paraphrase if either threshold is greater than 0.69 or 0.65, respectively. Using the combined score we obtain 187,590 paraphrase pairs. After combining and deduplicating this dataset with the publicly available dataset released by Hwang et al. (2015), we obtain 265,627 pairs, about 6 times as large as the PDTB.

Class         Type                 Subtype
Temporal
Contingency   Cause                reason
                                   result
              Pragmatic cause
              Condition
              Pragmatic condition
Comparison
Expansion
Table 2: PDTB Discourse Classes

In order to use this dataset for training a model to distinguish between causal and non-causal instances, we use the paired data to identify pairs where an explicit connective appears in at least one element of the pair. The explicit connective can appear in a Simple Wikipedia sentence or an English Wikipedia sentence. We then use patterns to find new phrases that align with these connectives in the matching sentence. To identify a set of seed words that unambiguously identify causal and non-causal phrases, we examine the PDTB. As seen in Table 2, causal relations fall under the Contingency class and Cause type. We consider connectives from the PDTB that either only or never appear as that type. The connective "because" is the only connective that is almost always a "reason" connective, whereas there are 11 unambiguous connectives for "result", including "accordingly", "as a consequence", "as a result", and "thus". There were many markers that were unambiguously not causal (e.g. "but", "though", "still", "in addition").

In order to label paraphrase data, we use constraints to identify possible AltLexes.2 We used Moses (Koehn et al., 2007) to train an alignment model on the created paraphrase dataset. Then for every paraphrase pair we identify any connectives that match with any potential AltLexes. Based on our linguistic analysis, we require these phrases to contain at least one content word, which we identify based on part of speech. We also draw on previous work (Pitler and Nenkova, 2009) that used the left and right sibling of a phrase. Therefore, we use the following rules to label new AltLexes: 1. Must be less than 7 words. 2. Must contain at least one content word: (a) a non-proper noun, (b) a non-modal and non-auxiliary verb, or (c) an adjective or adverb. 3. Left sibling of the connective must be a noun phrase, verb phrase, or sentence. 4. Right sibling of the connective must be a noun phrase, verb phrase, or sentence.
2We do not attempt to label arguments at this point. 1427 5. May not contain a modal or auxilary verb. Because connectives identify causality between events or agents, we require that each potential connective link 2 events/agents. We define an event or agent as a noun, verb, or an entire sentence. This means that we require the left sibling of the first word in a phrase and the right sibling of the last word in a phrase to be an event, where a sibling is the node at the same level in the constituent parse. We also require the left and right sibling rule for the explicit connectives, but we allow additional non-content words (for example, we would mark “because of” as a connective rather than “because.” We then mark the AltLex as causal or not causal. Given that the paraphrases and word alignments are noisy, we use the syntactic rules to decrease the amount of noise in the data by more precisely determining phrase boundaries. These rules are the same features used by Pitler and Nenkova (2009) for the early work on the PDTB on explicit connectives. These features were successful on the Wall Street Journal and they are applicable for other corpora as well. Also, they are highly indicative of discourse/non-discourse usage so we believe that we are improving on noisy alignments without losing valuable data. In the future, however, we would certainly like to move away from encoding these constraints using a rule-based method and use a machine learning approach to automatically induce rules. This method yields 72,135 non-causal and 9,190 causal training examples. Although these examples are noisy, the dataset is larger than the PDTB and was derived automatically. There are 35,136 argument pairs in the PDTB marked with one of the 3 relations that implies a discourse connective (Implicit, Explicit, and AltLex), and of these 6,289 are causal. Of the 6,289 causal pairs, 2,099 are explicit and 273 contain an AltLex. 5 Methods Given training data labeled by this distant supervision technique, we can now treat this problem as a supervised learning problem and create a classifier to identify causality. We consider two classes of features: features derived from the parallel corpus data and lexical semantic features. The parallel corpus features are created based on where AltLexes are used as paraphrases for causal indicators and in what context. The lexical semantic features use FrameNet, WordNet, and VerbNet to derive features from all the text in the sentence pair. These lexical resources exploit different perspectives on the data in complementary ways. The parallel corpus features encourage the classifier to select examples with AltLexes that are likely to be causal whereas the lexical semantic features allow the classifier to consider context for disambiguation. In addition to the dataset, the parallel corpus and lexical semantic features are the main contributions of this effort. 5.1 Parallel Corpus Features We create a subclass of features from the parallel corpus: a KL-divergence score to encourage the identification of phrases that replace causal connectives. Consider the following datapoints and assume that they are aligned in the parallel corpus: I was late because of traffic. I was late due to traffic. We want both of these examples to have a high score for causality because they are interchangeable causal phrases. Similarly, we want non-causal phrases that are often aligned to have a high score for non-causality. 
We define several distributions in order to determine whether an AltLex is likely to replace a known causal or non-causal connective. We consider all aligned phrases, not just ones containing a causal or non-causal connective, in order to reduce noisy matches: the idea is that non-connective paraphrases will occur often and in other contexts. The following conditional Bernoulli distributions are calculated for every aligned phrase in the dataset, where w_1 is the phrase, s_1 is the sentence it occurs in, s_2 is the aligned sentence, c is "causal" and nc is "not causal":

p_1 = p(w_1 \in s_1 \mid rel(s_1) \in \{c\}, w_1 \notin s_2)   (3)
p_2 = p(w_1 \in s_1 \mid rel(s_1) \in \{nc\}, w_1 \notin s_2)   (4)

We compare these two distributions to distributions for the same phrase in a different context (where o represents "other"):

q_1 = p(w_1 \in s_1 \mid rel(s_1) \in \{nc, o\}, w_1 \notin s_2)   (5)
q_2 = p(w_1 \in s_1 \mid rel(s_1) \in \{c, o\}, w_1 \notin s_2)   (6)

We then calculate D_{KL}(p_1 \| q_1) and D_{KL}(p_2 \| q_2). In order to use KL-divergence as a feature, we multiply the score by (-1)^{[p < q]} (flipping the sign when the in-context probability is smaller) and add one feature for causal and one for non-causal.

5.2 Lexical Semantic Features

As events are composed of predicates and arguments, and these are usually formed by nouns and verbs, we consider lexical semantic resources that define hierarchies for nouns and verbs. We thus use FrameNet, WordNet, and VerbNet as complementary resources from which to derive features. We hypothesize that these semantic features provide context not present in the text; from these we are able to infer causal and anti-causal properties. FrameNet is a resource for frame semantics, defining how objects and relations interact, and provides an annotated corpus of English sentences. WordNet provides a hierarchy of word senses, and we show that the top-level class of verbs is useful for indicating causality. VerbNet provides a more fine-grained approach to verb categorization that complements the views provided by FrameNet and WordNet.

In FrameNet, a semantic frame is a conceptual construction describing events or relations and their participants (Ruppenhofer et al., 2010). Frame semantics abstracts away from specific utterances and the ordering of words in order to represent events at a higher level. There are over 1,200 semantic frames in FrameNet, and some of these can be used as evidence or counter-evidence for causality (Riaz and Girju, 2013). Riaz and Girju (2013) identified 18 frames as causal (e.g. "Purpose", "Internal cause", "Reason", "Trigger"). We use these same frames to create a lexical score based on the FrameNet 1.5 corpus, which contains 170,000 sentences manually annotated with frames. We used a part-of-speech tagged version of the FrameNet corpus, and for each word and tag we count how often it occurs in the span of one of the given frames. We only considered nouns, verbs, adjectives, and adverbs. We then calculate p_w(c|t), the probability that a word w is causal given its tag t, and c_{wct}, the corresponding count. The lexical score of a word i is calculated using its assigned part-of-speech tag and is given by CS_i = p_{w_i}(c|t_i) \log c_{w_i c t_i}. The total score of a sequence of words is then \sum_{i=0}^{n} CS_i.

We also took this further and determined which frames are likely to be anti-causal. We started with a small set of seed words derived directly from 11 discourse classes (types and subtypes from Table 2), such as "Compare", "Contrast", "Explain", "Concede", and "List". We expanded this list using WordNet synonyms for the seed words.
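Concretely, this seed expansion can be sketched with NLTK's WordNet interface as follows. This is only an illustrative sketch: the original implementation is not specified in the paper, and any part-of-speech filtering of the synonyms is an assumption.

```python
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus to be installed

def expand_with_synonyms(seed_words):
    """Expand a seed list (e.g. 'compare', 'contrast', 'explain') with WordNet synonyms."""
    expanded = set(w.lower() for w in seed_words)
    for word in seed_words:
        for synset in wn.synsets(word):
            for lemma in synset.lemma_names():
                expanded.add(lemma.replace("_", " ").lower())
    return expanded

# Example: expand_with_synonyms(["compare", "contrast", "explain", "concede", "list"])
```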
We then extracted every frame associated with their stems in the stemmed FrameNet corpus. These derived frames were manually examined to develop a list of 48 anti-causal frames, including “Statement”, “Occasion”, “Relative time”, “Evidence”, and “Explaining the facts”. We create an anti-causal score using the FrameNet corpus just as we did for the causal score. The total anti-causal score of a sequence of words is Pn i=0 ACSi where ACSi = pwi(a|ti) log cwiati for anti-causal probabilities and counts. We split each example into three parts: the text before the AltLex, the AltLex, and the text after. Each section is given a causal score and an anti-causal score. Overall, there are six features derived using FrameNet: causal score and anticausal score for each part of the example. In WordNet, words are grouped into “synsets,” which represent all synonyms of a particular word sense. Each word sense in the WordNet hierarchy has a top-level category based on part of speech (Miller, 1995). Every word sense tagged as noun, verb, adjective, or adverb is categorized. Some examples of categories are “change”, “stative”, or “communication”. We only include the top level because of the polysemous nature of WordNet synsets. We theorize that words having to do with change or state should be causal indicators and words for communication or emotion may be anti-causal indicators. Similar to the FrameNet features, we split the example into three sections. However, we also consider the dependency parse of the data. We believe that causal relations are between events and agents which are represented by nouns and verbs. Events can also be represented by predicates and their arguments, which is captured by the dependency parse. As the root of a dependency parse is often a verb and sometimes a noun or adjective, we consider the category of the root of a dependency parse and its arguments. We include a categorical feature indicating the top-level category of the root of each of the three sections, including the AltLex. For both sides of the AltLex, we include the top-level category of all arguments as well. If a noun has no category, we mark it using its named-entity tag. If there is still no tag, we mark the category as “none.” VerbNet VerbNet is a resource devoted to storing information for verbs (Kipper et al, 2000). 1429 In contrast to WordNet, VerbNet provides a more fine-grained description of events while focusing less on polysemy. Some examples of VerbNet classes are “force”, “indicate”, and “wish”. In VerbNet, there are 273 verb classes, and we include their presence as a categorical feature. Similar to WordNet, we use VerbNet categories for three sections of the sentence: the text pre-AltLex, the AltLex, and the text post-AltLex. Unlike WordNet, we only mark the verbs in the AltLex, root, or arguments. Interaction Finally, we consider interactions between the WordNet and VerbNet features. As previous work (Marcu, 2001) (Biran and McKeown, 2013) used word pairs successfully, we hypothesize that pairs of higher-level categories will improve classification without being penalized as heavily by the sparsity of dealing with individual words. Thus we include interaction features between every categorical feature for the pre-AltLex text and every feature for the post-AltLex text. In all, we include the following features (L refers to the AltLex, B refers to the text before the AltLex and A refers to the text after the AltLex): 1. FrameNet causal score for L, B, and A. 2. 
FrameNet anti-causal score for L, B, and A. 3. WordNet top-level of L. 4. WordNet top-level of the root of B and A. 5. WordNet top-level for arguments of B and A. 6. VerbNet category for verb at the root of L. 7. VerbNet top-level category for any verb in the root of B and A. 8. VerbNet top-level category for any verbs in the arguments of B and A. 9. Categorical interaction features between the features from B and the features from A. 6 Results We evaluated our methods on two manually annotated test sets. We used one of these test sets for development only. For this set, one graduate computer science student and two students from the English department annotated a set of Wikipedia articles by marking any phrases they considered to indicate a causal relationship and marking the phrase as “reason” or “result.” Wikipedia articles from the following categories were chosen as we believe they are more likely to contain causal relationships: science, medicine, disasters, history, television, and film. For each article in this category, both the English and Simple Wikipedia articles were annotated. A total of 12 article pairs were annotated. IAA was computed to be 0.31 on two article pairs using Kripendorff’s alpha. IAA was very low and we also noticed that annotators seemed to miss sentences containing causal connectives. It is easy for an annotator to overlook a causal relation when reading through a large quantity of text. Thus, we created a new task that required labeling a connective as causal or not when provided with the sentence containing the connective. For testing, we used CrowdFlower to annotate the output of the system using this method. We created a balanced test set by annotating 600 examples, where the system labeled 300 as causal and 300 as non-causal. Contributors were limited to the highest level of quality and from English-speaking countries. We required 7 annotators for each data point. The IAA was computed on the qualification task that all annotators were required to complete. There were 15 questions on this task and 410 annotators. On this simplified task, the IAA improved to 0.69. We also considered evaluating the results on the PDTB but encountered several issues. As the PDTB only has a limited set of explicit intrasentence connectives marked, this would not show the full strength of our method. Many causal connectives that we discovered are not annotated in the PDTB. Alternatively, we considered evaluating on the AltLexes in the PDTB but these examples are only limited to inter-sentence cases, whereas the vast majority of our automatically annotated training data was for the intra-sentence case. Thus we concluded that any evaluation on the PDTB would require additional annotation. Our goal in this work was to identify new ways in which causality is expressed, unlike the PDTB where annotators were given a list of connectives and asked to determine discourse relations. We tested our hypothesis by training a binary3 classifier on our data using the full set of features we just described. We used a linear Support Vector Machine (SVM) classifier (Vapnik, 1998) trained using stochastic gradient descent (SGD) through the sci-kit learn package. (Pedregosa et al, 2011)4 We used elasticnet to encourage sparsity and tuned the regularization constant α through grid search. We use two baselines. The first baseline is the 3We combine “reason” and “result” into one “causal” class and plan to work on distinguishing between non-causal, reason, and result in the future. 
4We also considered a logistic regression classifier. 1430 Accuracy True Precision True Recall True F-measure Most Common Class 63.50 60.32 82.96 69.85 CONN 62.21 78.47 35.64 49.02 LS 67.68 61.98 58.51 60.19 KLD 58.03 91.17 19.55 32.20 LS ∪KLD 73.95 80.63 64.35 71.57 LS ∪LSinter 72.99 78.54 64.66 70.93 KLD ∪LS ∪LSinter 70.09 76.95 58.99 66.78 LS ∪KLD ∪CONN 71.86 70.28 77.60 73.76 Bootstrapping1 79.26 77.97 82.64 80.24 Bootstrapping2 79.58 77.29 84.85 80.90 Table 3: Experimental Results most common class of each AltLex according to its class in the initial training set. For example, “caused by” is almost always a causal AltLex. A second baseline uses the AltLex itself as a categorical feature and is shown as CONN in Table 3. For comparison, this is the same baseline used in (Pitler and Nenkova, 2009) on the explicit discourse relations in the PDTB. We compare these two baselines to ablated versions of our system. We evaluate on the KLD (KLD) and semantic (LS and LSinter) features described in sections 5 and 5.1. LS consists of features 1-8, all the FrameNet, VerbNet, and WordNet features. LSinter includes only the interaction between categorical features from WordNet and VerbNet. We calculate accuracy and true precision, recall, and F-measure for the causal class. As seen in Table 3, the best system (LS ∪KLD ∪CONN) outperforms the baselines.5 The lexical semantic features by themselves (LS) are similar to those used by (Riaz and Girju, 2014) although on a different task and with the WordNet and VerbNet features included. Note that the addition of the Altlex words and KL divergence (LS ∪KLD∪CONN) yields an absolute increase in f-measure of 13.57 points over lexical semantic features alone. 6.1 Bootstrapping Our method for labeling AltLexes lends itself naturally to a bootstrapping approach. As we are using explicit connectives to identify new AltLexes, we can also use these new AltLexes to identify additional ones. We then consider any paraphrase pairs where at least one of the phrases contains one of our newly discovered AltLexes. We also use 5These results are statistically significant by a binomial test with p < 7 ∗10−6. our classifier to automatically label these new data points and remove any phrases where the classifier did not agree on both elements in the pair. The set of features used were the KLD∪LS∪LSinter features as these performed best on the development set. We use early stopping on the development data to identify the point when adding additional data is not worthwhile. The bootstrapping method converges quickly. After 2 iterations we see a decrease in the F-measure of the development data. The increase in performance on the test data is significant. In Table 3, Bootstrappingn refers to results after n rounds of bootstrapping. Bootstrapping yields improvement over the supervised method with an absolute gain of 7.14 points. 6.2 Discussion Of note is that the systems without connectives (combinations of LS, LSinter, and KLD) perform well on the development set without using any lexical features. Using this system enables the discovery of new AltLexes during bootstrapping, as we cannot rely on having a closed class of connectives but need a way of classifying connectives not seen in the initial training set. Also important is that the Altlex by itself (CONN) performs poorly. 
In comparison, in the task of identifying discourse relations in the PDTB these features yield an 75.33 F-score and 85.85% accuracy in distinguishing between discourse and non-discourse usage (Pitler and Nenkova, 2009) and an accuracy of 93.67% when distinguishing between discourse classes. Although this is a different data set, this shows that identifying causality when there is an open class of connectives is much more difficult. We believe the connective by itself performs poorly because of the wide 1431 True Precision True Recall True F-measure FrameNet 67.88 53.14 59.61 WordNet 76.92 9.52 16.94 V erbNet 38.70 3.80 6.92 Table 4: Semantic Feature Ablation linguistic variation in these alternative lexicalizations. Many connectives appear only once or not at all in the training set, so the additional features are required to improve performance. In addition, the “most common class” baseline is a strong baseline. The strength of this performance provides some indication of the quality of the training data, as the majority of the time the connective is very indicative of its class in the held-out test data. However, the the overall accuracy is still much lower than if we use informative features. The KLD and LS feature sets appear to be complementary. The KLD feature sets have higher precision on a smaller section of the data, whereas the LS system has higher recall overall. These lexical semantic features likely have higher recall because these resources are designed to represent classes of words rather than individual words. Some connectives occur very rarely, so it is necessary to generalize the key aspects of the connectives and class-based resources provide this capability. In order to determine the contribution of each lexical resource, we perform additional feature ablation for each of FrameNet, WordNet, and VerbNet. As seen in Table 4, the lexical semantic resources each contribute uniquely to the classifier. The FrameNet features provide most of the performance of the classifier. The WordNet and VerbNet features, though not strong individually, supply complementary information and improve the overall performance of the LS system (see Table 3) compared to just using FrameNet alone. Finally, the model (LS ∪KLD ∪CONN) correctly identifies some causal relations that neither baseline identifies, such as: Language is reduced to simple phrases or even single words, eventually leading to complete loss of speech. Kulap quickly accelerated north, prompting the PAGASA to issue their final advisory on the system. These examples do not contain standard causal connectives and occur infrequently in the data, so the lexical semantic features help to identify them. After two rounds of bootstrapping, the system is able to recover additional examples that were not found previously, such as: When he finally changed back, Buffy stabbed him in order to once again save the world. This connective occurs rarely or not at all in the initial training data and is only recovered because of the improvements in the model. 7 Conclusion We have shown a method for identifying and classifying phrases that indicate causality. Our method for automatically building a training set for causality is a new contribution. We have shown statistically significant improvement over the naive baseline using semantic and parallel corpus features. The text in the AltLex alone is not sufficient to accurately identify causality. 
We show that our features are informative by themselves and perform well even on rarely occurring examples. Ultimately, the focus of this work is to improve detection of causal relations. Thus, we did not evaluate some intermediate steps, such as the quality of the automatically annotated corpus. Our use of distant supervision demonstrates that we can use a large amount of possibly noisy data to develop an accurate classifer. To evaluate on the intermediate step would have required an additional annotation process. In the future, we may improve this step using a machine learning approach. Although we have focused exclusively on Wikipedia, these methods could be adapted to other domains and languages. Causality is not easily expressed in English using a fixed set of phrases, so we would expect these methods to apply to formal and informal text ranging from news and journals to social media. Linguistic expressions of causality in other languages is another avenue for future research, and it would be interesting to note if other languages have the same variety of expression. 1432 References Or Biran and Kathleen McKeown. 2013. Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation. Proceedings of ACL. Sasha Blair-Goldensohn, Kathleen McKeown, and Owen Rambow. 2007. Building and Refining Rhetorical-Semantic Relation Models. Proceedings of NAACL-HLT. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised Learning of Narrative Event Chains. Stanford University. Stanford, CA 94305. Quang Xuan Do, Yee Seng Chan, and Dan Roth. 2011. Minimally Supervised Event Causality Identification. Transactions of ACL, 3:329344. William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning Sentences from Standard Wikipedia to Simple Wikipedia. Proceedings of NAACL-HLT. Yangfeng Ji and Jacob Eisenstein. 2015. One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations. Proceedings of EMNLP. Karin Kipper, Hoa Trang Dan, and Martha Palmer. 2000. Class-Based Construction of a Verb Lexicon. American Association for Artifical Intelligence. Majid Laali and Leila Kosseim. 2014. Inducing Discourse Connectives from Parallel Texts. Proceedings of COLING: Technical Papers. Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. Proceedings of ICML. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2012. A PDTB-Styled End-to-End Discourse Parser. Department of Computer Science, National University of Singapore. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. Proceedings of ACL: System Demonstrations. Daniel Marcu and Abdessamad Echihabi 2001. An Unsupervised Approach to Recognizing Discourse Relations. Proceedings of ACL. George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM. The PDTB Research Group 2008. The PDTB 2.0. Annotation Manual. Technical Report IRCS-08-01. Institute for Research in Cognitive Science, University of Pennsylvania. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text Proceedings of ACL. Emily Pitler and Ani Nenkova. 2009. Using Syntax to Disambiguate Explicit Discourse Connectives in Text. Proceedings of ACL-IJCNLP Short Papers. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. Proceedings of LREC. 
Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2010. Realization of Discourse Relations by Other Means: Alternative Lexicalizations. Proceedings of COLING. Kira Radinsky and Eric Horvitz. 2013. Mining the Web to Predict Future Events. Proceedings of WSDM. Mehwish Riaz and Roxana Girju. 2013. Toward a Better Understanding of Causality between Verbal Events: Extraction and Analysis of the Causal Power of Verb-Verb Associations. Proceedings of SIGDIAL. Mehwish Riaz and Roxana Girju. 2014. Recognizing Causality in Verb-Noun Pairs via Noun and Verb Semantics. Proceedings of the EACL 2014 Workshop on Computational Approaches to Causality in Language. Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, and Jan Scheffczyk. 2010. FrameNet II: Extended Theory and Practice. University of California, Berkeley. Vladimir N. Vapnik. 1998. Statistical Learning Theory. Wiley-Interscience. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. Proceedings of NAACL-HLT. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research Volume 12. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. Proceedings of ACL: Interactive Poster and Demonstration Sessions. 1433
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1434–1444, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text Kristina Toutanova Microsoft Research Redmond, WA, USA Xi Victoria Lin∗ Computer Science & Engineering University of Washington Wen-tau Yih Microsoft Research Redmond, WA, USA Hoifung Poon Microsoft Research Redmond, WA, USA Chris Quirk Microsoft Research Redmond, WA, USA Abstract Modeling relation paths has offered significant gains in embedding models for knowledge base (KB) completion. However, enumerating paths between two entities is very expensive, and existing approaches typically resort to approximation with a sampled subset. This problem is particularly acute when text is jointly modeled with KB relations and used to provide direct evidence for facts mentioned in it. In this paper, we propose the first exact dynamic programming algorithm which enables efficient incorporation of all relation paths of bounded length, while modeling both relation types and intermediate nodes in the compositional path representations. We conduct a theoretical analysis of the efficiency gain from the approach. Experiments on two datasets show that it addresses representational limitations in prior approaches and improves accuracy in KB completion. 1 Introduction Intelligent applications benefit from structured knowledge about the entities and relations in their domains. For example, large-scale knowledge bases (KB), such as Freebase (Bollacker et al., 2008) or DBPedia (Auer et al., 2007), have proven to be important resources for supporting open-domain question answering (Berant et al., 2013; Sun et al., 2015; Yih et al., 2015). In biomedicine, KBs such as the Pathway Interaction Database (NCI-PID) (Schaefer et al., 2009) are crucial for understanding complex diseases such as cancer and for advancing precision medicine. ∗This research was conducted during the author’s internship at Microsoft Research. While these knowledge bases are often carefully curated, they are far from complete. In non-static domains, new facts become true or are discovered at a fast pace, making the manual expansion of knowledge bases impractical. Extracting relations from a text corpus (Mintz et al., 2009; Surdeanu et al., 2012; Poon et al., 2015) or inferring facts from the relationships among known entities (Lao and Cohen, 2010) are thus important approaches for populating existing knowledge bases. Originally proposed as an alternative statistical relational learning method, the knowledge base embedding approach has gained a significant amount of attention, due to its simple prediction time computation and strong empirical performance (Nickel et al., 2011; Chang et al., 2014). In this framework, entities and relations in a knowledge base are represented in a continuous space, such as vectors and matrices. Whether two entities have a previously unknown relationship can be predicted by simple functions of their corresponding vectors or matrices. Early work in this direction focuses on exploring various kinds of learning objectives and frameworks, but the model is learned solely from known direct relationships between two entities (e.g., father(barack, sasha)) (Nickel et al., 2011; Socher et al., 2013; Bordes et al., 2013; Chang et al., 2014; Yang et al., 2015). 
In contrast, using multi-step relation paths (e.g., husband(barack, michelle) ∧ mother(michelle, sasha) to train KB embeddings has been proposed very recently (Guu et al., 2015; Garcia-Duran et al., 2015; Lin et al., 2015; Neelakantan et al., 2015). While using relation paths improves model performance, it also poses a critical technical challenge. As the number of possible relation paths between pairs of entities grows exponentially with path length, the training complexity increases sharply. Consequently, existing methods need 1434 to make approximations by sampling or pruning. The problem is worsened when the input is augmented with unlabeled text, which has been shown to improve performance (Lao et al., 2012; Gardner et al., 2013; Riedel et al., 2013; Gardner et al., 2014; Toutanova and Chen, 2015). Moreover, none of the prior methods distinguish relation paths that differ in the intermediate nodes they pass through (e.g., michelle in our example); all represent paths as a sequence of relation types. In this work, we aim to develop a KB completion model that can incorporate relation paths efficiently. We start from analyzing the procedures in existing approaches, focusing on their time and space complexity. Based on the observation that compositional representations of relation paths are in fact decomposable, we propose a novel dynamic programming method that enables efficient modeling of all possible relation paths, while also representing both relation types and nodes on the paths. We evaluated our approach on two datasets. The first is from the domain of gene regulatory networks. Apart from its obvious significance in biomedicine, it offers an excellent testbed for learning joint embedding of KBs and text, as it features existing knowledge bases such as NCIPID and an even larger body of text that grows rapidly (over one million new articles per year). By modeling intermediate nodes on relation paths, we improve the model by 3 points in mean average precision compared to previous work, while also providing a more efficient algorithm. The second dataset is based on a network derived from WordNet and previously used in work on knowledge base completion. On that dataset we demonstrate the ability of the model to effectively handle longer relation paths composed of a larger set of knowledge base relation types, with smaller positive impact of modeling intermediate nodes. 2 Preliminaries In this section, we first give a brief overview of the knowledge base and text representation used in this work. We then describe the task of knowledge base completion more formally and introduce our basic model setting and training objective. Knowledge Base A knowledge base (KB) is represented as a collection of subject-predicateobject triples (s, r, t), where s and t are the subject and object entities from a set E, and r is the predicate from a set R that denotes a relationFigure 1: A snapshot depicting the knowledge graph of a gene regulatory network, augmented with gene family and dependency path relations. ship between s and t. For example, in a KB of movie facts, we may find a triple (Han Solo, character, Star Wars), indicating that “Han Solo is a character in the Star Wars movie.” Let (s, π, t) = (s, r1, e1, r2, e2 . . . , en−1, rn, t) be a path in G with s/t as the start/end entities, r1, . . . , rn as the relation edges and e1, . . . , en−1 as the intermediate entities. In the domain of gene regulations, entities are genes and directed edges represent regulations. 
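Concretely, a KB of this kind, together with the path-constrained random-walk probabilities used later in the paper, can be represented as in the following minimal sketch. The uniform transition probability and the toy gene triples are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Store (s, r, t) triples and expose relation-typed adjacency."""

    def __init__(self, triples):
        self.out = defaultdict(list)          # (s, r) -> [t1, t2, ...]
        for s, r, t in triples:
            self.out[(s, r)].append(t)

    def transition_prob(self, s, r, t):
        """P(t | s, r): uniform over targets reachable from s via relation r (an assumption)."""
        targets = self.out.get((s, r), [])
        return 1.0 / len(targets) if t in targets else 0.0

    def path_prob(self, s, path, t):
        """Path-constrained random-walk probability P(t | s, pi) for a relation-type path."""
        probs = {s: 1.0}
        for r in path:
            nxt = defaultdict(float)
            for node, p in probs.items():
                for target in self.out.get((node, r), []):
                    nxt[target] += p * self.transition_prob(node, r, target)
            probs = nxt
        return probs.get(t, 0.0)

# Toy triples in the gene-regulation domain (hypothetical):
kb = KnowledgeGraph([
    ("GRB2", "positive_reg", "MAPK3"),
    ("MAPK3", "family", "MAPK1"),
])
print(kb.path_prob("GRB2", ["positive_reg", "family"], "MAPK1"))  # 1.0 in this toy graph
```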
At the abstract level, there are two key relations, positive reg and negative reg, which signify that the subject gene increases or decreases the activity of the object gene, respectively. Figure 1 shows a snapshot of a gene regulatory network. Genes such as GRB2 and MAPK3 are denoted by the light grey nodes. Regulations such as positive reg are denoted by long edges pointing from subject to object. In addition, some genes stem from a common evolutionary origin and share similar functions. They form a “gene family”, such as the MAPK family that includes MAPK1 and MAPK3. To jointly embed text, we use sentences containing co-occurring gene pairs, such as “GRB2 was involved in the activation of gene MAPK3...”, and augment the above knowledge graph with dependency paths between the gene mentions, following the general approach of Riedel et al. (2013). Task and Scoring Models The KB completion task is to predict the existence of links (s, r, t) that are not seen in the training knowledge base. More specifically, we focus on ranking object and subject entities for given queries (s, r, ∗) and (∗, r, t). The basic KB embedding models learn latent vector/matrix representations for entities and relations, and score each possible triple using learned parameters θ and a scoring function f(s, r, t|θ). 1435 Below, we describe two basic model variations which we build on in the rest of the paper. BILINEAR The BILINEAR model learns a square matrix Wr ∈Rd×d for each relation r ∈R and a vector xe ∈Rd for each entity e ∈E. The scoring function of a relation triple (s, r, t) is defined as: f(s, r, t|θ) = x⊤ s Wrxt. (1) For a knowledge graph G with |E| = Ne and |R| = Nr, the parameter size of the BILINEAR model is O d2 . The large number of parameters make it prone to overfitting, which motivates its diagonal approximation, BILINEAR-DIAG. BILINEAR-DIAG The BILINEAR-DIAG model restricts the relation representations to the class of diagonal matrices. 1 In this case, the parameter size is reduced to O d  . Although we present model variants in terms of the BILINEAR representation, all experiments in this work are based on the BILINEAR-DIAG special case. All models are trained using the same loss function, which maximizes the probability of correct subject/object fillers of a given set of triples with relations or relation paths. The probability is defined by a log-linear model normalized over a set of negative samples: P(t|s, π; θ) = ef(s,π,t|θ) P t′∈Neg(s,π,∗)∪{t} ef(s,π,t′|θ) . The probability of subject entities is defined analogously. Define the loss for a given training triple L(s, π, t|θ) = −log P(t|s, π; θ) − log P(s|t, π; θ). The overall loss function is the sum of losses over all triples, with an L2 regularization term. L(θ) = X i L(si, πi, ti; θ) + λ∥θ∥2 (2) 3 Relation-path-aware Models We first review two existing methods using vector space relation paths modeling for KB completion in §3.1. We then introduce our new algorithm that can efficiently take into account all relation paths between two nodes as features and simultaneously model intermediate nodes on relation 1The downside of this approximation is that it enforces symmetry in every relation, i.e., f(s, r, t) = (xs ◦xt)⊤wr = (xt ◦xs)⊤wr = f(t, r, s). However, empirically it has been shown to outperform the Bilinear model when the number of training examples is small (Yang et al., 2015). paths in §3.2. We present a detailed theoretical comparison of the efficiency of these three types of methods in §3.3. 
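Before turning to prior path-based approaches, the basic BILINEAR-DIAG score and the negative-sampling softmax of this section can be sketched as follows. This is a minimal NumPy sketch: the parameter shapes and the handling of negative samples are illustrative assumptions rather than the exact training code.

```python
import numpy as np

def score_bilinear_diag(x_s, w_r, x_t):
    """f(s, r, t) = x_s^T diag(w_r) x_t, i.e. a sum of coordinate-wise products."""
    return float(np.sum(x_s * w_r * x_t))

def log_prob_object(x_s, w_r, x_t, X_neg):
    """log P(t | s, r): softmax over the correct object and a set of sampled negatives.

    X_neg is a (num_negatives, d) array of entity vectors for the negative objects.
    """
    pos = score_bilinear_diag(x_s, w_r, x_t)
    neg = X_neg @ (x_s * w_r)                 # scores of all negative objects at once
    all_scores = np.concatenate(([pos], neg))
    m = np.max(all_scores)
    logsumexp = m + np.log(np.sum(np.exp(all_scores - m)))
    return pos - logsumexp

# The per-triple training loss is -log P(t | s, r) - log P(s | r, t), summed over all
# training triples together with an L2 penalty, as in eq. (2) of this paper.
```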
3.1 Prior Approaches The two approaches we consider here are: using relation paths to generate new auxiliary triples for training (Guu et al., 2015) and using relation paths as features for scoring (Lin et al., 2015). Both approaches take into account embeddings of relation paths between entities, and both of them used vector space compositions to combine the embeddings of individual relation links ri into an embedding of the path π. The intermediate nodes ei are neglected. The natural composition function of a BILINEAR model is matrix multiplication (Guu et al., 2015). For this model, the embedding of a length-n path Φπ ∈Rd×d is defined as the matrix product of the sequence of relation matrices for the relations in π. Φπ = Wr1 . . . Wrn. (3) For the BILINEAR-DIAG model, all the matrices are diagonal and the computation reduces to coordinate-wise product of vectors in Rd. 3.1.1 Relation Paths as a Compositional Regularizer In Guu et al. (2015), information from relation paths was used to generate additional auxiliary terms in training, which serve to provide a compositional regularizer for the learned node and relation embeddings. A more limited version of the same method was simultaneously proposed in Garcia-Duran et al. (2015). The method works as follows: starting from each node in the knowledge base, it samples m random walks of length 2 to a maximum length L, resulting in a list of samples {[si, πi, ti]}. si and ti are the start and end nodes of the random walk, respectively, and πi consists of a sequence of intermediate edges and nodes. Each of these samples is used to define a new triple used in the training loss function (eq. 2). The score of each triple under a BILINEAR composition model is defined as f(si, πi, ti|θ) = xt siΦπixti, where Φπi is the product of matrices for relation link types in the path (eq. 3). 3.1.2 PRUNED-PATHS: Relation Paths as Compositional Features Instead of using relation paths to augment the set of training triples, Lin et al. (2015) proposed to 1436 use paths (s, π, t) to define the scoring function f(s, r, t|θ, Πs,t). Here Πs,t denotes the sum of the embeddings of a set of paths π between the two nodes in the graph, weighted by path-constrained random walk probabilities. Their implementation built on the TransE (Bordes et al., 2013) embedding model; in comparison, we formulate a similar model using the BILINEAR model. We refer to such an approach as the PRUNEDPATHS model, for it discards paths with weights below a certain threshold. We define the model under a BILINEAR composition as follows: Let {π1, π2, . . . , πK} denote a fixed set of path types that can be used as a source of features for the model. We abuse notation slightly to refer to a sequence of types of relation links r1, r2, . . . , rn in a path in G as π. We denote by P(t|s, π) the path-constrained random walk probability of reaching node t starting from node s and following sequences of relation types as specified in π. We define the weighted path representation F(s, t) for a node pair as the weighted sum of the representations of paths π in the set {π1, π2, . . . , πK} that have non-zero path-constrained random walk probabilities. The representation is defined as: F(s, t) = P π w|π|P(t|s, π)Φ(π), where Φ(π) is the path relation type representation from (eq. 3). The weights w|π| provide a shared parameter for paths of each length, so that the model may learn to trust the contribution of paths of different lengths differentially. 
The dependence on θ and the set of paths Πs,t between the two nodes in the graph was dropped for simplicity. Figure 2 illustrates the weighted path representation between nodes GRB2 and MAPK3 from Figure 1. 2 Using the above definition, the score of a candidate triple f(s, r, t|θ, Πs,t) is defined as: f(s, r, t) = x⊤ s Wrxt + vec(F(s, t))⊤vec(Wr) (4) The first term of the scoring function is the same as that of the BILINEAR model, and the second term takes into account the similarity of the weighted path representations for (s, t) and the predicted relation r. Here we use element-wise product of the two matrices as the similarity metric. Training and scoring under this model requires explicit constructions of paths, and computing and storing the 2Unlike this illustration, the representations of dependency paths are not decomposed into representations of individual edges in our implementation. Figure 2: An illustration of the weighted sum of path representations for paths connecting GRB2 and MAPK3 from Figure 1.The prefix “ ” of a relation type indicates an inverse relation. The path-constrained random walk probabilities for P1, P2 and P3 are 0.3, 0.4 and 0.05 respectively, and 0.0 for the rest. The path length weights are omitted. random walk probabilities P(t|s, π), which makes it expensive to scale. In the next section, we introduce an algorithm that can efficiently take into account all paths connecting two nodes, and naturally extend it to model the impact of intermediate nodes on the informativeness of the paths. 3.2 ALL-PATHS: A Novel Representation and Algorithm We now introduce a novel algorithm for efficiently computing and learning the scoring function from (eq. 4), while summing over the set of all paths π up to a certain length L, and additionally modeling the impact of intermediate nodes on the representations Φ(π) of paths. Depending on the characteristics of the knowledge graph and the dimensionality of the learned embedding representations, this method in addition to being more exact, can be faster and take less memory than a method that explicitly generates and prunes full relation paths. We first define a new representation function for paths of the form (s, π, t) = (s, r1, e1, r2, e2 . . . , en−1, rn, t), which has as a special case the relation-type based function used in prior work, but can additionally model the impact of intermediate nodes ej on the path representation. We introduce new parameters wei which can impact the representations of paths passing through nodes ei. wei is a scalar weight in our implementation, but vectors of dimensionality d could also be implemented using the same general approach. The embedding of the sample path π under a BILINEAR model is defined as: Φπ = Wr1 tanh(we1) · · · Wrn tanh(wen). Here the weight of each node is first transformed into the range [−1, 1] using a non-linear tanh function. To derive our exact algorithm, we 1437 use quantities Fl(s, t) denoting the weighted sum of path representations for all paths of length l between nodes s and t. The weighted sum of path representations F(s, t) can be written as: X l=1...L wlFl(s, t), (5) where Fl(s, t) = P π∈Pl(s,t) p(s|t, π)Φπ and Pl(s, t) denotes all paths of length l between s, t. Computing F(s, t) in the naive way by enumerating all possible (s, t) paths is impractical since the number of possible paths can be very large, especially in graphs containing text. 
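For reference, the path-feature score of eq. (4), specialized to the diagonal case where vec(W_r) is simply the vector w_r and the aggregated path representation F(s, t) is a d-dimensional vector, can be sketched as follows; how F(s, t) is obtained is treated here as a given input.

```python
import numpy as np

def score_with_path_features(x_s, w_r, x_t, F_st):
    """Eq. (4) for BILINEAR-DIAG: direct bilinear term plus a path-feature match term.

    F_st is the aggregated path representation F(s, t), already weighted and summed
    over path lengths, as a d-dimensional vector.
    """
    direct = np.sum(x_s * w_r * x_t)       # x_s^T diag(w_r) x_t
    path_match = np.dot(F_st, w_r)         # similarity between F(s, t) and the query relation
    return float(direct + path_match)
```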
Therefore prior work selected a subset of paths through sampling and pruning (Neelakantan et al., 2015; Lin et al., 2015). However, the properties of the BILINEAR composition function for path representation enable us to incrementally build the sums of all path representations exactly, using dynamic programming. Algorithm 1 shows how to compute the necessary quantities Fl(s, t) for all entity pairs in G. 3 Algorithm 1 Compute the sum of path representations (up to length L) for every entity pair (s, t). Input : G = (E, R), {Wr|r ∈R}, list of all entity pairs EP = {(s, t)}. Output : FL(s, t) for every (s, t) ∈EP. Initialize: for (s, t) ∈EP do if R(s, t) ̸= ∅then F1(s, t) ←P r∈R(s,t) tanh(wt)p(t|s, r)Wr(s,t) else F1(s, t) ←0 end end for l = 2, . . . , L do for (s, t) ∈EP do Fl(s, t) ←0 for e ∈ν(s) do Fl(s, t) ←Fl(s, t) + F1(s, e)Fl−1(e, t) end end end The key to this solution is that the representations of the longer paths are composed of those of the shorter paths and that the random walk probabilities of relation paths are products of transition probabilities over individual relation edges. The sub-components of paths frequently overlap. For example, all paths π depicted in Figure 2 share the same tail edge family. The algorithms from prior 3We further speed up the computation by pre-computing, for each node s, the set of notes t reachable via paths of length l. Then we iterate over non-zero entries only, instead of all pairs in the loops. work perform a sum over product representations of paths, but it is more efficient to regroup these into a product of sums. This regrouping allows taking into account exponentially many possible paths without explicitly enumerating them and individually computing their path-constrained random walk probabilities. After all quantities Fl(s, t) are computed, their weighted sum F(s, t) can be computed using O dL  operations for each entity pair. These can directly be used to compute the scores of all positive and negative triples for training, using (eq. 4). As we can see, no explicit enumeration or storage of multi-step relation paths between pairs of nodes is needed. To compute gradients of the loss function with respect to individual model parameters, we use a similar algorithm to compute the error for each intermediate edge (ei, r′, ej) for each group of length l paths between s and t. The algorithm is a variant of forward-backward, corresponding to the forward Algorithm 1. Notice that directly adding node representations for the paths in the PRUNED-PATHS approach would be infeasible since it would lead to an exposition in the possible path types and therefore the memory and running time requirements of the model. Thus the use of an adaptation of our dynamic programming algorithm to the task of summing over node sequences for a given path type would become necessary to augment the parametric family of the PRUNED-PATHS approach with node representations. 3.3 Efficiency Analysis We perform a worst-case analysis of the time and memory required for training and evaluation for each of the introduced methods. For the models taking into account relation paths, we assume that all paths up to a certain length L will be modeled4. For the basic model text is not taken into account in training, since modeling text and KB relations uniformly in this model did not help performance as seen in the Experiments section. We only take text into account as a source of relation path features or auxiliary path facts. 
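As a concrete companion to Algorithm 1 above, here is a minimal sketch of the dynamic program for the BILINEAR-DIAG case, where each relation is a d-dimensional vector and composition is a coordinate-wise product. The data layout (dictionaries keyed by node pairs) and the edge-probability input are assumptions made for clarity rather than the exact implementation.

```python
import numpy as np
from collections import defaultdict

def sum_path_representations(edges, rel_vecs, node_weights, max_len, d):
    """Compute F_l(s, t), the weighted sums of path representations, for l = 1..max_len.

    edges: list of (s, r, t, prob) with prob = P(t | s, r), the one-step random-walk prob.
    rel_vecs: relation -> d-dim vector (the diagonal of W_r).
    node_weights: entity -> scalar weight w_e.
    Returns F where F[l][(s, t)] is the sum over length-l paths of prob * composed repr.
    """
    F = [None] * (max_len + 1)
    # Length-1 sums: one term per direct edge, scaled by tanh of the target-node weight.
    F[1] = defaultdict(lambda: np.zeros(d))
    for s, r, t, prob in edges:
        F[1][(s, t)] += np.tanh(node_weights[t]) * prob * rel_vecs[r]
    # Longer paths: F_l(s, t) = sum over e of F_1(s, e) * F_{l-1}(e, t) (coordinate-wise).
    for l in range(2, max_len + 1):
        F[l] = defaultdict(lambda: np.zeros(d))
        by_start = defaultdict(list)          # e -> list of (t, F_{l-1}(e, t))
        for (e, t), vec in F[l - 1].items():
            by_start[e].append((t, vec))
        for (s, e), first in F[1].items():
            for t, rest in by_start.get(e, []):
                F[l][(s, t)] += first * rest
    return F

# The final F(s, t) of eq. (5) is then sum_l w_l * F[l][(s, t)] for learned length weights w_l.
```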
Notation Let Ne denote the number of nodes in the knowledge graph, Ekb the number of knowl4If only a subset of paths is used, the complexity of the methods other than ALL-PATHS will be reduced. However, in order to compare the methods in a similar setting, we analyze performance in the case of modeling all multi-step paths. 1438 edge graph links/triples, Etxt the number of textual links/triples, a the average number of outgoing links for a node in the graph given the outgoing relation type, Nr the number of distinct relations in the graph, η the number of negative triples for each training triple, and d the dimensionality of entity embeddings and (diagional) matrix representations of relations. BILINEAR-DIAG In this basic model, the training time is O 2d(η + 1)Ekb  and memory is O dNe + dNr  . The memory required is for storing embeddings of entities and relations, and the time is for scoring 2(η + 1) triples for every fact in the training KB. Guu et al. (2015) This method generates training path triples {(x, π, y)} for all entity pairs connected by paths π up to length L. The number of such triples5 is T = Ekb + Etxt + P l=2...L Nrl × Ne × al. The memory required is the memory to store all triples and the set of relation paths, which is O T + P l=2...L lNrl . The time required includes the time to compute the scores and gradients for all triples and their negative examples (for subject and object position), as well as to compute the compositional path representations of all path types. The time to compute compositional path representations is O d P l=2...L lNrl and the time spent per triple is 2d(η + 1) as in the basic model. Therefore the overall time per iteration is O 2d(η + 1)T  +O d P l=2...L lNrl . The test time and memory requirements of this method are the same as these of BILINEAR-DIAG, which is a substantial advantage over other methods, if evaluation-time efficiency is important. PRUNED-PATHS This method computes and stores the values of the random walk probabilities for all pairs of nodes and relation paths, for which these probabilities are non-zero. This can be done in time O T  where Triples is the same quantity used in the analysis of Guu et al. (2015). The memory requirements of this method are the same as these of (Guu et al., 2015), up to a constant to store random-walk probabilities for paths. The time requirements are different, however. At training time, we compute scores and update gradients for triples corresponding to direct 5The computation uses the fact that the number of path type sequences of length l is N l r. We use a, the average branching factor of nodes given relation types, to derive the estimated number of triples of a given relation type for a path of length l. knowledge base edges, whose number is Ekb. For each considered triple, however, we need to compute the sum of representations of path features that are active for the triple. We estimate the average number of active paths per node pair as T Ne2 . Therefore the overall time for this method per training iteration is O 2d(η + 1)Ekb T Ne2  +O d P l=2...L lNrl . We should note that whether this method or the one of Guu et al. (2015) will be faster in training depends on whether the average number of paths per node pair multiplied by Ekb is bigger or smaller than the total number of triples T . Unlike the method of Guu et al. 
(2015), the evaluation-time memory requirements of this approach are the same as its training memory requirements, or they could be reduced slightly to match the evaluation-time memory requirements of ALL-PATHS, if these are lower as determined by the specific problem instance. ALL-PATHS This method does not explicitly construct or store fully constructed paths (s, π, t). Instead, memory and time is determined by the dynamic program in Algorithm 1, as well as the forward-backward algorithm for computation of gradients. The memory required to store path representation sums Fl(s, t) is O dLNe2 in the worst case. Denote E = Ekb + Etxt. The time to compute these sums is O dE(1 + P l=2...L(l − 1)Ne)  . After this computation, the time to compute the scores of training positive and negative triples is O d2(η + 1)EkbL  . The time to increment gradients using each triple considered in training is O dEL2 . The evaluation time memory is reduced relative to training time memory by a factor of L and the evaluation time per triple can also be reduced by a factor of L using precomputation. Based on this analysis, we computed training time and memory estimates for our NCI+Txt knowledge base. Given the values of the quantities from our knowledge graph and d = 50, η = 50, and maximum path length of 5, the estimated memory for (Guu et al., 2015) and PRUNEDPATHS is 4.0×1018 and for ALL-PATHS the memory is 1.9×109. The time estimates are 2.4×1021, 2.6 × 1025, and 7.3 × 1015 for (Guu et al., 2015), PRUNED-PATHS, and ALL-PATHS, respectively. 1439 Model KB KB and Text MAP HITS@10 MAP HITS@10 BILINEAR-DIAG (Guu et al., 2015) d=100 12.48 19.66 12.48 19.66 BILINEAR-DIAG d=100 28.56 39.92 28.56 39.92 BILINEAR-DIAG d=2000 30.16 42.51 30.16 42.51 +Guu et al. (2015) d=100 orig. 23.20 34.84 23.20 34.84 +Guu et al. (2015) d=100 reimpl. 29.13 40.59 30.25 41.45 PRUNED-PATHS d=100 c=1000 32.31 43.16 36.42 48.22 PRUNED-PATHS d=100 c=100 32.31 43.16 36.79 48.27 PRUNED-PATHS d=100 c=1 32.31 43.16 37.03 48.26 ALL-PATHS d=100 32.31 43.16 36.24 48.60 ALL-PATHS+NODES d=100 33.92 45.96 39.31 52.53 Table 1: KB completion results on NCI-PID test: comparison of our compositional learning approach (ALL-PATHS+NODES) with baseline systems. d is the embedding dimension; sampled paths occurring less than c times were pruned in PRUNED-PATHS. 4 Experiments Our experiments are designed to study three research questions: (i) What is the impact of using path representations as a source of compositional regularization as in (Guu et al., 2015) versus using them as features for scoring as in PRUNED-PATHS and ALL-PATHS? (ii) What is the impact of using textual mentions for KB completion in different models? (iii) Does modeling intermediate path nodes improve the accuracy of KB completion? Datasets We used two datasets for evaluation: NCI-PID and WordNet. For the first set of experiments, we used the Pathway Interaction Database (NCI-PID) (Schaefer et al., 2009) as our knowledge base, which was created by editors from the Nature Publishing Groups, in collaboration with the National Cancer Institute. It contains a collection of highquality gene regulatory networks (also referred to as pathways). The original networks are in the form of hypergraphs, where nodes could be complex gene products (e.g., “protein complex” with multiple proteins bound together) and regulations could have multiple inputs and outputs. 
Following the convention of most network modeling approaches, we simplified the hypergraphs into binary regulations between genes (e.g., GRB2 positive reg MAPK3), which yields a graph with 2774 genes and 14323 triples. The triples are then split into train, dev, and test sets, of size 10224, 1315, 2784, respectively. We identified genes belonging to the same family via the common letter prefix in their names, which adds 1936 triples to training. As a second dataset, we used a WordNet KB with the same train, dev, and test splits as Guu et al. (2015). There are 38,696 entities and 11 types of knowledge base relations. The KB includes 112,581 triples for training, 2,606 triples for validation, and 10,544 triples for testing. WordNet does not contain textual relations and is used for a more direct comparison with recent works. Textual Relations We used PubMed abstracts for text for NCI-PID. We used the gene mentions identified by Literome (Poon et al., 2014), and considered sentences with co-occurring gene pairs from NCI-PID. We defined textual relations using the fully lexicalized dependency paths between two gene mentions, as proposed in Riedel et al. (2013). Additionally, we define trigger-mapped dependency paths, where only important “trigger” words are lexicalized and the rest of the words are replaced with a wild-card character X. A set of 333 words often associated with regulation events in Literome (e.g. induce, inhibit, reduce, suppress) were used as trigger words. To avoid introducing too much noise, we only included textual relations that occur at least 5 times between mentions of two genes that have a KB relation. This resulted in 3,827 distinct textual relations and 1,244,186 mentions.6 The number of textual relations is much larger than that of KB relations, and it helped induce much larger connectivity among genes (390,338 pairs of genes are directly connected in text versus 12,100 pairs in KB). Systems ALL-PATHS denotes our compositional learning approach that sums over all paths using 6Modeling such a large number of textual relations introduces sparsity, which necessitates models such as (Toutanova et al., 2015; Verga et al., 2015) to derive composed representations of text. We leave integration with such methods for future work. 1440 dynamic programming; ALL-PATHS+NODES additionally models nodes in the paths. PRUNEDPATHS denotes the traditional approach that learns from sampled paths detailed in §3.1.2; paths with occurrence less than a cutoff are pruned (c = 1 in Table 1 means that all sampled paths are used). The most relevant prior approach is Guu et al. (2015). We ran experiments using both their publicly available code and our re-implementation. We also included the BILINEAR-DIAG baseline. Implementation Details We used batch training with RProp (Riedmiller and Braun, 1993). The L2 penalty λ was set to 0.1 for all models, and the entity vectors xe were normalized to unit vectors. For each positive example we sample 500 negative examples. For our implementation of (Guu et al., 2015), we run 5 random walks of each length starting from each node and we found that adding a weight β to the multi-step path triples improves the results. After preliminary experimentation, we fixed β to 0.1. Models using KB and textual relations were initialized from models using KB relations only7. Model training was stopped when the development set MAP did not improve for 40 iterations; the parameters with the best MAP on the development set were selected as output. 
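Returning to the trigger-mapped textual relations defined above, the sketch below keeps only trigger words lexicalized along a dependency path between two gene mentions and replaces the remaining words with the wild-card X. The path encoding, the three-word trigger list, and the function name are illustrative assumptions rather than the Literome pipeline itself.

```python
# Minimal sketch (assumed encoding): a lexicalized dependency path between two
# gene mentions is a list mixing dependency labels (prefixed with ':') and
# words. Only words on the trigger list stay lexicalized; other words become
# the wild-card 'X'. The paper uses 333 trigger words; three are shown here.
TRIGGERS = {"induce", "inhibit", "suppress"}

def trigger_map(dep_path):
    mapped = []
    for tok in dep_path:
        if tok.startswith(":") or tok in ("GENE1", "GENE2"):
            mapped.append(tok)          # keep structure and argument slots
        elif tok.lower() in TRIGGERS:
            mapped.append(tok.lower())  # keep trigger words
        else:
            mapped.append("X")          # wild-card everything else
    return " ".join(mapped)

path = ["GENE1", ":nsubj", "induce", ":dobj", "expression", ":prep_of", "GENE2"]
print(trigger_map(path))   # GENE1 :nsubj induce :dobj X :prep_of GENE2
```

In the paper, the resulting textual relations are additionally filtered to those occurring at least 5 times between gene pairs that also share a KB relation.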
Finally, we used only paths of length up to 3 for NCI-PID and up to length 5 for WordNet.8 Evaluation metrics We evaluate our models on their ability to predict the subjects/objects of knowledge base triples in the test set. Since the relationships in the gene regulation network are frequently not one-to-one, we use the mean average precision (MAP) measure instead of the mean reciprocal rank often used in knowledge base completion works in other domains. In addition to MAP, we use Hits@10, which is the percentage of correct arguments ranked among the top 10 predictions9. We compute measures for ranking both the object entities (s, r, ∗) and the subject entities(∗, r, t). We report evaluation metrics computed on the union of the two query set. 7In addition, models using path training were initialized from models trained on direct relation links only. 8Using paths up to length 4 on NCI-PID did not perform better. 9As a common practice in KB completion evaluation, for both MAP and Hits@10, we filtered out the other correct answers when ranking a particular triple to eliminate ranking penalization induced by other correct predictions. NCI-PID Results Table 1 summarizes the knowledge base (KB) completion results on the NCI-PID test. The rows compare our compositional learning approach ALL-PATHS+NODES with prior approaches. The comparison of the two columns demonstrates the impact when text is jointly embedded with KB. Our compositional learning approach significantly outperforms all other approaches in both evaluation metrics (MAP and HITS@10). Moreover, jointly embedding text and KB led to substantial improvement, compared to embedding KB only.10 Finally, modeling nodes in the paths offers significant gains (ALLPATHS+NODE gains 3 points in MAP over ALLPATHS), with statistical significance (p < .001) according to a McNemar test. Evaluating the effect of path pruning on the traditional approach (PRUNED-PATHS) is quite illuminating. As the number of KB relations is relatively small in this domain, when only KB is embedded, most paths occur frequently. So there is little difference between the heaviest pruned version (c=1000) and the lightest (c=1). When textual relations are included, the cutoff matters more, although the difference was small as many rarer textual relations were already filtered beforehand. In either case, the accuracy difference between ALLPATHS and PRUNED-PATHS is small, and ALLPATHS mainly gains in efficiency. However, when nodes are modeled, the compositional learning approach gains in accuracy as well, especially when text is jointly embedded. Comparison among the baselines also offers valuable insights. The implementation of Guu et al. (2015) with default parameters performed significantly worse than our re-implementation. Also, our re-implementation achieves only a slight gain over the BILINEAR-DIAG baseline, whereas the original implementation obtains substantial improvement over its own version of BILINEARDIAG. These results underscore the importance of hyper-parameters and optimization, and invite future systematic research on the impact of such modeling choices.11 10Text did not help for the models in the first four rows in the Table, possibly because in these approaches text and KB information are equally weighted in the loss function and the more numerous textual triples dominate the KB ones. 11The differences between our two implementations are: max-margin loss versus softmax loss and stochastic gradient training versus batch training. 
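To make the evaluation protocol above concrete, the following sketch computes a filtered rank, Hits@10, and average precision for a single object query (s, r, *); MAP and Hits@10 in Tables 1 and 2 are averages of such per-query values over both subject and object queries. The toy scores and gene names are placeholders, not model output.

```python
# Sketch of the filtered ranking metrics for one query (s, r, *).
# 'scores' maps candidate entities to model scores; 'gold' holds the correct
# objects for this query (other known correct answers would also be filtered).
def filtered_rank(scores, gold, target):
    # Rank 'target' against non-gold candidates only (filtered setting).
    return 1 + sum(1 for e, sc in scores.items()
                   if e not in gold and sc > scores[target])

def hits_at_10(scores, gold):
    ranks = [filtered_rank(scores, gold, g) for g in gold]
    return sum(r <= 10 for r in ranks) / len(ranks)

def average_precision(scores, gold):
    ranked = sorted(scores, key=scores.get, reverse=True)
    hits, ap = 0, 0.0
    for pos, ent in enumerate(ranked, start=1):
        if ent in gold:
            hits += 1
            ap += hits / pos       # precision at each correct answer
    return ap / len(gold)

scores = {"MAPK3": 2.1, "MAPK1": 1.7, "EGFR": 0.9, "TP53": 0.4}
gold = {"MAPK3", "MAPK1"}
print(hits_at_10(scores, gold), average_precision(scores, gold))
```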
1441 Model MAP HITS@10 BILINEAR-DIAG (Guu et al., 2015) N/A 12.9 BILINEAR-DIAG 8.0 12.2 +Guu et al. (2015) N/A 14.4 PRUNED-PATHS l = 3 c=10 9.5 14.8 PRUNED-PATHS l = 3 c=1 9.5 14.9 PRUNED-PATHS l = 5 c=10 8.9 14.4 ALL-PATHS l = 3 9.4 14.7 ALL-PATHS+NODES l=3 9.4 15.2 ALL-PATHS l = 5 9.6 16.6 ALL-PATHS+NODES l=5 9.8 16.7 Table 2: KB completion results on the WordNet test set: comparison of our compositional learning approach (ALL-PATHS) with baseline systems. The maximum length of paths is denoted by l. Sampled paths occurring less than c times were pruned in PRUNED-PATHS. WordNet Results Table 2 presents a comparative evaluation on the WordNet dataset. This dataset has a larger number of knowledge base relation types compared to NCI-PID, and longer relation paths in this KB are expected to be beneficial. Guu et al. (2015) evaluated their compositional regularization approach on this dataset and we can directly compare to their results. The first two rows in the Table show the baseline BILINEAR-DIAG model results according to the results reported in (Guu et al., 2015) and our implementation. The MAP results were not reported in Guu et al. (2015); hence the NA value for MAP in row one.12 On this dataset, our implementation of the baseline model does not have substantially different results than Guu et al. (2015) and we use their reported results for the baseline and compositionally trained model. Compositional training improved performance in Hits@10 from 12.9 to 14.4 in Guu et al. (2015), and we find that using PRUNED-PATHS as features gives similar, but a bit higher performance gains. The PRUNED-PATHS method is evaluated using count cutoffs of 1 and 10, and maximum path lengths of 3 and 5. As can be seen, lower count cutoff performed better for paths up to length 3, but we could not run the method with path lengths up to 5 and count cutoff of 1, due to excessive memory requirements (more than 248GB). When using count cutoff of 10, paths up to length 5 performed worse than paths up to length 3. This performance degradation could be avoided with 12We ran the trained model distributed by Guu et al. (2015) and obtained a much lower Hits@10 value of 6.4 and MAP of of 3.5. Due to the discrepancy, we report the original results from the authors’ paper which lack MAP values instead. a staged training regiment where models with shorter paths are first trained and used to initialize models using longer paths. The performance of the ALL-PATHS method can be seen for maximum paths up to lengths 3 and 5, and with or without using features on intermediate path nodes.13 As shown in Table 2, longer paths were useful, and features on intermediate nodes were also beneficial. We tested the significance of the differences between several pairs of models and found that nodes led to significant improvement (p < .002) for paths of length up to 3, but not for the setting with longer paths. All models using path features are significantly better than the baseline BILINEAR-DIAG model. To summarize both sets of experiments, the ALL-PATHS approach allows us to efficiently include information from long KB relation paths as in WordNet, or paths including both text and KB relations as in NCI-PID. Our dynamic programming algorithm considers relation paths efficiently, and is also straightforwardly generalizable to include modeling of intermediate path nodes, which would not be directly possible for the PRUNED-PATHS approach. 
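The dynamic program referred to in this summary (Algorithm 1) is not reproduced here, but the schematic sketch below conveys its core idea for the BILINEAR-DIAG composition: per-pair sums of path representations of length l are assembled from the sums of length l−1, so every bounded-length path contributes without ever being enumerated. The graph encoding, relation names, and random vectors are assumptions, not the authors' implementation.

```python
import numpy as np

# Schematic sketch (not the paper's Algorithm 1): for every node pair (s, t),
# accumulate the sum over all directed paths s -> t of length 1..L of the
# element-wise product of the relation vectors on the path, which is the
# BILINEAR-DIAG composition of a path.
def all_path_sums(edges, rel_vecs, L, d):
    F = {1: {}}
    for s, r, t in edges:                          # length-1 paths = edges
        F[1].setdefault((s, t), np.zeros(d))
        F[1][(s, t)] += rel_vecs[r]
    for l in range(2, L + 1):
        F[l] = {}
        for s, r, m in edges:                      # first hop of a length-l path
            for (m2, t), tail_sum in F[l - 1].items():
                if m2 == m:                        # tails starting at node m
                    F[l].setdefault((s, t), np.zeros(d))
                    F[l][(s, t)] += rel_vecs[r] * tail_sum
    return F   # F[l][(s, t)]: summed representations of all length-l paths

d = 4
rng = np.random.default_rng(0)
rels = {r: rng.normal(size=d) for r in ["pos_reg", "neg_reg", "mention"]}
edges = [("GRB2", "pos_reg", "MAPK3"), ("MAPK3", "mention", "TP53"),
         ("GRB2", "neg_reg", "TP53")]
F = all_path_sums(edges, rels, L=2, d=d)
print(sorted(F[2]))   # node pairs reachable by a length-2 path
```

Scoring a candidate triple would then combine these per-length sums with learned weights, and gradients can be pushed through the same recurrence, which is what keeps memory near O(dLNe²) instead of growing with the number of paths.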
Using intermediate nodes was beneficial on both datasets, and especially when paths could include textual relations as in the NCI-PID dataset. 5 Conclusions In this work, we propose the first approach to efficiently incorporate all relation paths of bounded length in a knowledge base, while modeling both relations and intermediate nodes in the compositional path representations. Experimental results on two datasets show that it outperforms prior approaches by modeling intermediate path nodes. In the future, we would like to study the impact of relation paths for additional basic KB embedding models and knowledge domains. Acknowledgments The authors would like to thank the anonymous reviewers and meta-reviewer for suggestions on making the paper stronger, as well as Kenton Lee and Aria Haghighi for their helpful comments on the draft. 13To handle the larger scale of the WordNet dataset in terms of the number of nodes in the knowledge base, we enforced a maximum limit (of 100) on the degree of a node that can be an intermediate node in a multi-step relation path. 1442 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. Springer. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA, October. Association for Computational Linguistics. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems (NIPS). Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Empirical Methods in Natural Language Processing (EMNLP). Alberto Garcia-Duran, Antoine Bordes, and Nicolas Usunier. 2015. Composing relationships with translations. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 286–290, Lisbon, Portugal, September. Association for Computational Linguistics. Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic cues. In Empirical Methods in Natural Language Processing (EMNLP). Matt Gardner, Partha Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases. In Empirical Methods in Natural Language Processing (EMNLP). Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318–327, Lisbon, Portugal, September. Association for Computational Linguistics. Ni Lao and William W Cohen. 2010. Relational retrieval using a combination of path-constrained random walks. Machine learning. Ni Lao, Amarnag Subramanya, Fernando Pereira, and William W Cohen. 2012. Reading the web with learned syntactic-semantic inference rules. 
In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705–714, Lisbon, Portugal, September. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACLIJCNLP). Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In ACL. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 809–816. Hoifung Poon, Chris Quirk, Charlie DeZiel, and David Heckerman. 2014. Literome: Pubmed-scale genomic knowledge base in the cloud. Bioinformatics, 30(19):2840–2842. Hoifung Poon, Kristina Toutanova, and Chris Quirk. 2015. Distant supervision for cancer pathway extraction from text. In PSB. Sebastian Riedel, Limin Yao, Benjamin M. Marlin, and Andrew McCallum. 2013. Relation extraction with matrix factorization and universal schemas. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Martin Riedmiller and Heinrich Braun. 1993. A direct adaptive method for faster backpropagation learning: The rprop algorithm. In Neural Networks, 1993., IEEE International Conference on, pages 586–591. IEEE. Carl F Schaefer, Kira Anthony, Shiva Krupa, Jeffrey Buchoff, Matthew Day, Timo Hannay, and Kenneth H Buetow. 2009. PID: the pathway interaction database. Nucleic acids research, 37(suppl 1):D674–D679. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems (NIPS). 1443 Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open domain question answering via semantic enrichment. In Proceedings of the 24th International Conference on World Wide Web, pages 1045–1055. International World Wide Web Conferences Steering Committee. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multiinstance multi-label learning for relation extraction. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66. Association for Computational Linguistics. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2015. Multilingual relation extraction using compositional universal schema. arxiv, abs/1511.06396. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 
2015. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations (ICLR). Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331, Beijing, China, July. Association for Computational Linguistics. 1444
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1445–1455, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Commonsense Knowledge Base Completion Xiang Li∗‡ Aynaz Taheri† Lifu Tu‡ Kevin Gimpel‡ ∗University of Chicago, Chicago, IL, 60637, USA †University of Illinois at Chicago, Chicago, IL, 60607, USA ‡Toyota Technological Institute at Chicago, Chicago, IL, 60637, USA [email protected], [email protected], {lifu,kgimpel}@ttic.edu Abstract We enrich a curated resource of commonsense knowledge by formulating the problem as one of knowledge base completion (KBC). Most work in KBC focuses on knowledge bases like Freebase that relate entities drawn from a fixed set. However, the tuples in ConceptNet (Speer and Havasi, 2012) define relations between an unbounded set of phrases. We develop neural network models for scoring tuples on arbitrary phrases and evaluate them by their ability to distinguish true held-out tuples from false ones. We find strong performance from a bilinear model using a simple additive architecture to model phrases. We manually evaluate our trained model’s ability to assign quality scores to novel tuples, finding that it can propose tuples at the same quality level as mediumconfidence tuples from ConceptNet. 1 Introduction Many ambiguities in natural language processing (NLP) can be resolved by using knowledge of various forms. Our focus is on the type of knowledge that is often referred to as “commonsense” or “background” knowledge. This knowledge is rarely expressed explicitly in textual corpora (Gordon and Van Durme, 2013). Some researchers have developed techniques for inferring this knowledge from patterns in raw text (Gordon, 2014; Angeli and Manning, 2014), while others have developed curated resources of commonsense knowledge via manual annotation (Lenat and Guha, 1989; Speer and Havasi, 2012) or games with a purpose (von Ahn et al., 2006). Curated resources typically have high precision but suffer from a lack of coverage. For cerrelation right term conf. MOTIVATEDBYGOAL relax 3.3 USEDFOR relaxation 2.6 MOTIVATEDBYGOAL your muscle be sore 2.3 HASPREREQUISITE go to spa 2.0 CAUSES get pruny skin 1.6 HASPREREQUISITE change into swim suit 1.6 Table 1: ConceptNet tuples with left term “soak in hotspring”; final column is confidence score. tain resources, researchers have developed methods to automatically increase coverage by inferring missing entries. These methods are commonly categorized under the heading of knowledge base completion (KBC). KBC is widelystudied for knowledge bases like Freebase (Bollacker et al., 2008) which contain large sets of entities and relations among them (Mintz et al., 2009; Nickel et al., 2011; Riedel et al., 2013; West et al., 2014), including recent work using neural networks (Socher et al., 2013; Yang et al., 2014). We improve the coverage of commonsense resources by formulating the problem as one of knowledge base completion. We focus on a particular curated commonsense resource called ConceptNet (Speer and Havasi, 2012). ConceptNet contains tuples consisting of a left term, a relation, and a right term. The relations come from a fixed set. While terms in Freebase tuples are entities, ConceptNet terms can be arbitrary phrases. Some examples are shown in Table 1. An NLP application may wish to query ConceptNet for information about soaking in a hotspring, but may use different words from those contained in the ConceptNet tuples. 
Our goal is to do on-the-fly knowledge base completion so that queries can be answered robustly without requiring the precise linguistic forms contained in ConceptNet. To do this, we develop neural network models to embed terms and provide scores to arbi1445 trary tuples. We train them on ConceptNet tuples and evaluate them by their ability to distinguish true and false held-out tuples. We consider several functional architectures, comparing two composition functions for embedding terms and two functions for converting term embeddings into tuple scores. We find that all architectures are able to outperform several baselines and reach similar performance on classifying held-out tuples. We also experiment with several training objectives for KBC, finding that a simple cross entropy objective with randomly-generated negative examples performs best while also being fastest. We manually evaluate our trained model’s ability to assign quality scores to novel tuples, finding that it can propose tuples at the same quality level as medium-confidence tuples from ConceptNet. We release all of our resources, including our ConceptNet KBC task data, large sets of randomly-generated tuples scored with our model, training code, and pretrained models with code for calculating the confidence of novel tuples.1 2 Related Work Our methods are similar to past work on KBC (Mintz et al., 2009; Nickel et al., 2011; Lao et al., 2011; Nickel et al., 2012; Riedel et al., 2013; Gardner et al., 2014; West et al., 2014), particularly methods based on distributed representations and neural networks (Socher et al., 2013; Bordes et al., 2013; Bordes et al., 2014a; Bordes et al., 2014b; Yang et al., 2014; Neelakantan et al., 2015; Gu et al., 2015; Toutanova et al., 2015). Most prior work predicts new relational links between terms drawn from a fixed set. In a notable exception, Neelakantan and Chang (2015) add new entities to KBs using external resources along with properties of the KB itself. Relatedly, Yao et al. (2013) induce an unbounded set of entity categories and associate them with entities in KBs. Several researchers have developed techniques for discovering commonsense knowledge from text (Gordon et al., 2010; Gordon and Schubert, 2012; Gordon, 2014; Angeli and Manning, 2014). Open information extraction systems like REVERB (Fader et al., 2011) and NELL (Carlson et al., 2010) find tuples with arbitrary terms and relations from raw text. In contrast, we start with a set of commonsense facts to use for train1Available at http://ttic.uchicago.edu/ ˜kgimpel/commonsense.html. ing, though our methods could be applied to the output of these or other extraction systems. Our goals are similar to those of the AnalogySpace method (Speer et al., 2008), which uses matrix factorization to improve coverage of ConceptNet. However, AnalogySpace can only return a confidence score for a pair of terms drawn from the training set. Our models can assign scores to tuples that contain novel terms (as long as they consist of words in our vocabulary). Though we use ConceptNet, similar techniques can be applied to other curated resources like WordNet (Miller, 1995) and FrameNet (Baker et al., 1998). For WordNet, tuples can contain lexical entries that are linked via synset relations (e.g., “hypernym”). WordNet contains many multiword entries (e.g., “cold sweat”), which can be modeled compositionally by our term models; alternatively, entire glosses could be used as terms. 
To expand frame relationships in FrameNet, tuples can draw relations from the frame relation types (e.g., “is causative of”) and terms can be frame lexical units or their definitions. Several researchers have used commonsense knowledge to improve language technologies, including sentiment analysis (Cambria et al., 2012; Agarwal et al., 2015), semantic similarity (Caro et al., 2015), and speech recognition (Lieberman et al., 2005). Our hope is that our models can enable many other NLP applications to benefit from commonsense knowledge. Our work is most similar to that of Angeli and Manning (2013). They also developed methods to assess the plausibility of new facts based on a training set of facts, considering commonsense data from ConceptNet in one of their settings. Like us, they can handle an unbounded set of terms by using (simple) composition functions for novel terms, which is rare among work in KBC. One key difference is that their best method requires iterating over the KB at test time, which can be computationally expensive with large KBs. Our models do not require iterating over the training set. We compare to several baselines inspired by their work, and we additionally evaluate our model’s ability to score novel tuples derived from both ConceptNet and Wikipedia. 3 Models Our goal is to represent commonsense knowledge such that it can be used for NLP tasks. We as1446 sume this knowledge is given in the form of tuples ⟨t1, R, t2⟩, where t1 is the left term, t2 is the right term, and R is a (directed) relation that exists between the terms. Examples are shown in Table 1.2 Given a set of tuples, our goal is to develop a parametric model that can provide a confidence score for new, unseen tuples. That is, we want to design and train models that define a function score(t1, R, t2) that provides a quality score for an arbitrary tuple ⟨t1, R, t2⟩. These models will be evaluated by their ability to distinguish true heldout tuples from false ones. We describe two model families for scoring tuples. We assume that we have embeddings for words and define models that use these word embeddings to score tuples. So our models are limited to tuples in which terms consist of words in the word embedding vocabulary, though future work could consider character-based architectures for open-vocabulary modeling (Huang et al., 2013; Ling et al., 2015). 3.1 Bilinear Models We first consider bilinear models, since they have been found useful for KBC in past work (Nickel et al., 2011; Jenatton et al., 2012; Garc´ıa-Dur´an et al., 2014; Yang et al., 2014). A bilinear model has the following form for a tuple ⟨t1, R, t2⟩: v⊤ 1 MR v2 where v1 ∈Rr is the (column) vector representing t1, v2 ∈Rr is the vector for t2, and MR ∈Rr×r is the parameter matrix for relation R. To convert terms t1 and t2 into term vectors v1 and v2, we consider two possibilities: word averaging and a bidirectional long short-term memory (LSTM) recurrent neural network (Hochreiter and Schmidhuber, 1997). This provides us with two models: Bilinear AVG and Bilinear LSTM. One downside of this architecture is that as the length of the term vectors grows, the size of the relation matrices grows quadratically. This can slow down training while requiring more data to learn the large numbers of parameters in the matrices. To address this, we include an additional nonlinear transformation of each term: ui = a(W (B)vi + b(B)) 2These examples are from the Open Mind Common Sense (OMCS) part of ConceptNet version 5 (Speer and Havasi, 2012). 
In our experiments below, we only use OMCS tuples. where a is a nonlinear activation function (tuned among ReLU, tanh, and logistic sigmoid) and where we have introduced additional parameters W (B) and b(B). This gives us the following model: scorebilinear(t1, R, t2) = u⊤ 1 MR u2 When using the LSTM, we tune the decision about how to produce the final term vectors to pass to the bilinear model, including possibly using the final vectors from each direction and the output of max or average pooling. We use the same LSTM parameters for each term. 3.2 Deep Neural Network Models Our second family of models is based on deep neural networks (DNNs). While bilinear models have been shown to work well for KBC, their functional form makes restrictions about how terms can interact. DNNs make no such restrictions. As above, we define two models, one based on using word averaging for the term model (DNN AVG) and one based on LSTMs (DNN LSTM). For the DNN AVG model, we obtain the term vectors v1 and v2 by averaging word vectors in the respective terms. We then concatenate v1, v2, and a relation vector vR to form the input of the DNN, denoted vin. The DNN uses a single hidden layer: u = a(W (D1)vin + b(D1)) scoreDNN(t1, R, t2) = W (D2)u + b(D2) (1) where a is again a (tuned) nonlinear activation function. The size of the hidden vector u is tuned, but the output dimensionality (the numbers of rows in W (D2) and b(D2)) is fixed to 1. We do not use a nonlinear activation for the final layer since our goal is to output a scalar score. For the DNN LSTM model, we first create a single vector for the two terms using an LSTM. That is, we concatenate t1, a delimiter token, and t2 to create a single word sequence. We use a bidirectional LSTM to convert this word sequence to a vector, again possibly using pooling (the decision is tuned; details below). We concatenate the output of this bidirectional LSTM with the relation vector vr to create the DNN input vector vin, then use Eq. 1 to obtain a score. We found this to work better than separately using an LSTM on each term. We can not try this for the Bilinear LSTM model since its functional form separates the two term vectors. 1447 The relation vectors vR are learned in addition to the DNN parameters W (D1), W (D2), b(D1), b(D2), and the LSTM parameters (in the case of the DNN LSTM model). Also, word embedding parameters are updated in all settings. 4 Training Given a tuple training set T, we train our models using two different loss functions: hinge loss and a binary cross entropy function. Both rely on ways of generating negative examples (Section 4.3). Both also use regularization (Section 4.4). 4.1 Hinge Loss Given a training tuple τ = ⟨t1, R, t2⟩, the hinge loss seeks to make the score of τ larger than the score of negative examples by a margin of at least γ. This corresponds to minimizing the following loss, summed over all examples τ ∈T: losshinge(τ) = max{0, γ −score(τ) + score(τneg(t1))} + max{0, γ −score(τ) + score(τneg(R))} + max{0, γ −score(τ) + score(τneg(t2))} where τneg(t1) is the negative example obtained by replacing t1 in τ with some other t1, and τneg(R) and τneg(t2) are defined analogously for the relation and right term. We describe how we generate these negative examples in Section 4.3 below. 4.2 Binary Cross Entropy Though we only have true tuples in our training set, we can create a binary classification problem by assigning a label of 1 to training tuples and a label of 0 to negative examples. 
Then we can minimize cross entropy (CE) as is common when using neural networks for classification. To generate negative examples, we consider the methods described in Section 4.3 below. We also need to convert our models’ scores into probabilities, which we do by using a logistic sigmoid σ on score. We denote the label as ℓ, where the label is 1 if the tuple is from the training set and 0 if it is a negative example. Then the loss is defined: lossCE(τ, ℓ) = −ℓlog σ(score(τ)) −(1−ℓ) log(1 −σ(score(τ))) When using this loss, we generate three negative examples for each positive example (one for swapping each component of the tuple, as in the hinge loss). For a mini-batch of size β, there are β positive examples and 3β negative examples used for training. The loss is summed over these 4β examples yielded by each mini-batch. 4.3 Negative Examples For the loss functions above, we need ways of automatically generating negative examples. For efficiency, we consider using the current mini-batch only, as our models are trained using optimization on mini-batches. We consider the following three strategies to construct negative examples. Each strategy constructs three negative examples for each positive example τ: one by replacing t1, one by replacing R, and one by replacing t2. Random sampling. We create the three negative examples for τ by replacing each component with its counterpart in a randomly-chosen tuple in the same mini-batch. Max sampling. We create the three negative examples for τ by replacing each component with its counterpart in some other tuple in the minibatch, choosing the substitution to maximize the score of the resulting negative example. For example, when swapping out t1 in τ = ⟨t1, R, t2⟩, we choose the substitution t′ 1 as follows: t′ 1 = argmax t:⟨t,R′,t′ 2⟩∈µ\τ score(t, R, t2) where µ is the current mini-batch of tuples. We perform the analogous procedure for R and t2. Mix sampling. This is a mixture of the above, using random sampling 50% of the time and max sampling the remaining 50% of the time. 4.4 Regularization We use L2 regularization. For the DNN models, we add the penalty term λ∥θ∥2 to the losses, where λ is the regularization coefficient and θ contains all other parameters. However, for the bilinear models we regularize the relation matrices MR toward the identity matrix instead of all zeroes, adding the following to the loss: λ1∥θ∥2 + λ2 X R ∥MR −Ir∥2 2 where Ir is the r × r identity matrix, the summation is performed over all relations R, and θ represents all other parameters. 1448 5 Experimental Setup We now evaluate our tuple models. We measure whether our models can distinguish true and false tuples by training a model on a large set of tuples and testing on a held-out set. 5.1 Task Design The tuples are obtained from the Open Mind Common Sense (OMCS) entries in the ConceptNet 5 dataset (Speer and Havasi, 2012). They are sorted by a confidence score. The most confident 1200 tuples were reserved for creating our test set (TEST). The next most confident 600 tuples (i.e., those numbered 1201–1800) were used to build a development set (DEV1) and the next most confident 600 (those numbered 1801–2400) were used to build a second development set (DEV2). For each set S (S ∈{DEV1, DEV2, TEST}), for each tuple τ ∈S, we created a negative example and added it to S. So each set doubled in size. To create a negative example from τ ∈S, we randomly swapped one of the components of τ with another tuple τ ′ ∈S. 
One third of the time we swapped t1 in τ for t1 in τ ′, one third of the time we swapped their R’s, and the remaining third of the time we swapped their t2’s. Thus, distinguishing positive and negative examples in this task is similar to the objectives optimized during training. Each of DEV1 and DEV2 has 1200 tuples (600 positive examples and 600 negative examples), while TEST has 2400 tuples (1200 positive and 1200 negative). For training data, we selected 100,000 tuples from the remaining tuples (numbered 2401 and beyond). The task is to separate the true and false tuples in our test set. That is, the labels are 1 for true tuples and 0 for false tuples. Given a model for scoring tuples, we select a threshold by maximizing accuracy on DEV1 and report accuracies on DEV2. This is akin to learning the bias feature weight (using DEV1) of a linear classifier that uses our model’s score as its only feature. We tuned several choices—including word embeddings, hyperparameter values, and training objectives—on DEV2 and report final performance on TEST. One annotator (a native English speaker) attempted the same classification task on a sample of 100 tuples from DEV2 and achieved an accuracy of 95%. We release these datasets to the community so that others can work on this same task. 5.2 Word Embeddings Our tuple models rely on initial word embeddings. To help our models better capture the commonsense knowledge in ConceptNet, we generated word embedding training data using the OMCS sentences underlying our training tuples (we excluded the top 2400 tuples which were used for creating DEV1, DEV2, and TEST). We created training data by merging the information in the tuples and their OMCS sentences. Our goal was to combine the grammatical context of the OMCS sentences with the words in the actual terms, so as to ensure that we learn embeddings for the words in the terms. We also insert the relations into the OMCS sentences so that we can learn embeddings for the relations themselves. We describe the procedure by example and also release our generated data for ease of replication. The tuple ⟨soak in a hotspring, CAUSES, get pruny skin⟩was automatically extracted/normalized (by the ConceptNet developers) from the OMCS sentence “The effect of [soaking in a hotspring] is [getting pruny skin]” where brackets surround terms. We replace the bracketed portions with their corresponding terms and insert the relation between them: “The effect of soak in a hotspring CAUSES get pruny skin”. We do this for all training tuples.3 We used the word2vec (Mikolov et al., 2013) toolkit to train skip-gram word embeddings on this data. We trained for 20 iterations, using a dimensionality of 200 and a window size of 5. We refer to these as “CN-trained” embeddings for the remainder of this paper. Similar approaches have been used to learn embeddings for particular downstream tasks, e.g., dependency parsing (Bansal et al., 2014). We use our CN-trained embeddings within baseline methods and also provide the initial word embeddings of our models. For all of our models, we update the initial word embeddings during learning. In the baseline methods described below, we compare our CN-trained embeddings to pretrained word embeddings. We use the GloVe (Pennington et al., 2014) embeddings trained on 840 billion tokens of Common Crawl web text and the PARAGRAM-SimLex embeddings of Wieting et al. (2015), which were tuned to have strong performance on the SimLex-999 task (Hill et al., 2015). 
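Before turning to the baselines, the sketch below makes the Bilinear AVG scorer of Section 3.1 concrete: each term is an average of word vectors, passed through the nonlinear transform, and the two transformed terms are scored with a relation matrix. The toy vocabulary, random parameters, and dimensions merely stand in for the CN-trained embeddings and learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 200, 150                        # embedding size and transformed term size
vocab = ["soak", "in", "a", "hotspring", "get", "pruny", "skin"]
E = {w: rng.normal(size=d) for w in vocab}        # stand-in word embeddings
W_B, b_B = rng.normal(size=(r, d)) * 0.01, np.zeros(r)
M = {"CAUSES": np.eye(r)}   # stand-in relation matrix (training regularizes M_R toward I)

def term_vec(term):
    # Word-averaging term model (the AVG variant).
    return np.mean([E[w] for w in term.split()], axis=0)

def transform(v):
    return np.tanh(W_B @ v + b_B)      # u_i = a(W^(B) v_i + b^(B)), with a = tanh

def score_bilinear(t1, rel, t2):
    u1, u2 = transform(term_vec(t1)), transform(term_vec(t2))
    return u1 @ M[rel] @ u2            # u_1^T M_R u_2

s = score_bilinear("soak in a hotspring", "CAUSES", "get pruny skin")
print(s, 1 / (1 + np.exp(-s)))         # raw score and its sigmoid "confidence"
```

A decision threshold over this score is then chosen on DEV1, as described in Section 5.1.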
3For reversed relations, indicated by an asterisk in the OMCS sentences, we swap t1 and t2 in the tuple. 1449 5.3 Baselines We consider three baselines inspired by those of Angeli and Manning (2013): • Similar Fact Count (Count): For each tuple τ = ⟨t1, R, t2⟩in the evaluation set, we count the number of similar tuples in the training set. A training tuple τ ′ = ⟨t′ 1, R′, t′ 2⟩is considered “similar” to τ if R = R′, one of the terms matches exactly, and the other term has the same head word. That is, (R = R′) ∧(t1 = t′ 1) ∧ (head(t2) = head(t′ 2)), or (R = R′) ∧(t2 = t′ 2) ∧(head(t1) = head(t′ 1)). The head word for a term was obtained by running the Stanford Parser (Klein and Manning, 2003) on the term. This baseline does not use word embeddings. • Argument Similarity (ArgSim): This baseline computes the cosine similarity of the vectors for t1 and t2, ignoring the relation. Vectors for t1 and t2 are obtained by word averaging. • Max Similarity (MaxSim): For tuple τ in an evaluation set, this baseline outputs the maximum similarity between τ and any tuple in the training set. The similarity is computed by concatenating the vectors for t1, R, and t2, then computing cosine similarity. As in ArgSim, we obtain vectors for terms by averaging their words. We only consider R when using our CN-trained embeddings since they contain embeddings for the relations. When using GloVe and PARAGRAM embeddings for this baseline, we simply use the two term vectors (still constructed via averaging the words in each term). We chose these baselines because they can all handle unbounded term sets but differ in their other requirements. ArgSim and MaxSim use word embeddings while Count does not. Count and MaxSim require iterating over the training set during inference while ArgSim does not. For each baseline, we tuned a threshold on DEV1 to maximize classification accuracy then tested on DEV2 and TEST. 5.4 Training and Tuning We used AdaGrad (Duchi et al., 2011) for optimization, training for 20 epochs through the training tuples. We separately tuned hyperparameters for each model and training objective. We tuned the following hyperparameters: the relation matrix size r for the bilinear models (also the length of the transformed term vectors, denoted u1 and u2 above), the activation a, the hidden layer size g for the DNN models, the relation vector length d for the DNN models, the LSTM hidden vector size h for models with LSTMs, the mini-batch size β, the regularization parameters λ, λ1, and λ2, and the AdaGrad learning rate α. All tuning used early stopping: periodically during training, we used the current model to find the optimal threshold on DEV1 and evaluated on DEV2. Due to computational limitations, we were unable to perform thorough grid searches for all hyperparameters. We combined limited grid searches with greedy hyperparameter tuning based on regions of values that were the most promising. For the Bilinear LSTM and DNN LSTM, we did hyperparameter tuning by training on the full training set of 100,000 tuples for 20 epochs, computing DEV2 accuracy once per epoch. For the averaging models, we tuned by training on a subset of 1000 tuples with β = 200 for 20 epochs; the averaging models showed more stable results and did not require the full training set for tuning. Below are the tuned hyperparameter values: • Bilinear AVG: for CE: r = 150, a = tanh, β = 200, α = 0.01, λ1 = λ2 = 0.001. Hinge loss: same values as above except α = 0.005. 
• Bilinear LSTM: for CE: r = 50, a = ReLU, h = 200, β = 800, α = 0.02, and λ1 = λ2 = 0.00001. To obtain vectors from the term bidirectional LSTMs, we used max pooling. For hinge loss: r = 50, a = tanh, h = 200, β = 400, α = 0.007, λ1 = 0.00001, and λ2 = 0.01. To obtain vectors from the term bidirectional LSTMs, we used the concatenation of average pooling and final hidden vectors in each direction. For each sampling method and loss function, α was tuned by grid search with the others fixed to the above values. • DNN AVG: for both losses: a = ReLU, d = 200, g = 1000, β = 600, α = 0.01, λ = 0.001. • DNN LSTM: for both losses: a = ReLU, d = 200, bidirectional LSTM hidden layer size h = 200, hidden layer dimension g = 800, β = 400, α = 0.005, and λ = 0.00005. To get vectors from the term LSTMs, we used max pooling. 6 Results Word Embedding Comparison. We first evaluate the quality of our word embeddings trained on the ConceptNet training tuples. Table 2 compares 1450 GloVe PARAGRAM CN-trained ArgSim 68 69 73 MaxSim 73 70 82 Table 2: Accuracies (%) on DEV2 of two baselines using three different sets of word embeddings. Our ConceptNet-trained embeddings outperform GloVe and PARAGRAM embeddings. DNN AVG DNN LSTM CE hinge CE hinge random 124 230 710 783 mix 20755 21045 25928 26380 max 39338 41867 49583 49427 Table 4: Loss function runtime comparison (seconds per epoch) of the DNN models. accuracies on DEV2 for the two baselines that use embeddings: ArgSim and MaxSim. We find that pretrained GloVe and PARAGRAM embeddings perform comparably, but both are outperformed by our ConceptNet-trained embeddings. We use the latter for the remaining experiments in this paper. Training Comparison. Table 3 shows the results of our models with the two loss functions and three sampling strategies. We find that the binary cross entropy loss with random sampling performs best across models. We note that our conclusion differs from some prior work that found max or mix sampling to be better than random (Wieting et al., 2016). We suspect that this difference may stem from characteristics of the ConceptNet training data. It may often be the case that the maxscoring negative example in the mini-batch is actually a true fact, due to the generic nature of the facts expressed. Table 4 shows a runtime comparison of the losses and sampling strategies.4 We find random sampling to be orders of magnitude faster than the others while also performing the best. Final Results. Our final results are shown in Table 5. We show the DEV2 and TEST accuracies for our baselines and for the best configuration (tuned on DEV2) for each model. All models outperform all baselines handily. Our models perform similarly, with the Bilinear models and the DNN AVG model all exceeding 90% on both DEV2 and TEST. We note that the AVG models performed strongly compared to those that used LSTMs for 4These experiments were performed using 2 threads on a 3.40-GHz Intel Core i7-3770 CPU with 8 cores. DEV2 TEST Count 75.4 79.0 ArgSim 72.9 74.2 MaxSim 81.9 83.5 Bilinear AVG 90.3 91.7 Bilinear LSTM 90.8 90.7 DNN AVG 91.3 92.0 DNN LSTM 88.1 89.2 Bilinear AVG + data 91.8 92.5 human ∼95.0 — Table 5: Accuracies (%) of baselines and final model configurations on DEV2 and TEST. “+ data” uses enlarged training set of size 300,000, and then doubles this training set by including tuples with conjugated forms; see text for details. Human performance on DEV2 was estimated from a sample of size 100. modeling terms. We suggest two reasons for this. 
The first is that most terms are short, with an average term length of 2.3 words in our training tuples. An LSTM may not be needed to capture long-distance properties. The second reason may be due to hyperparameter tuning. Recall that we used a greedy search for optimal hyperparameter values; we found that models with LSTMs take more time per epoch, more epochs to converge, and exhibit more hyperparameter sensitivity compared to models based on averaging. This may have contributed to inferior hyperparameter values for the LSTM models. We also trained the Bilinear AVG model on a larger training set (row labeled “Bilinear AVG + data”). We note that the ConceptNet tuples typically contain unconjugated forms; we sought to use both conjugated and unconjugated words. We began with a larger training set of 300,000 tuples from ConceptNet, then augmented them to include conjugated word forms as in the following example. For the tuple ⟨soak in a hotspring, CAUSES, get pruny skin⟩obtained from the OMCS sentence “The effect of [soaking in a hotspring] is [getting pruny skin]”, we generated an additional tuple ⟨soaking in a hotspring, CAUSES, getting pruny skin⟩. We thus created twice as many training tuples. The results with this larger training set improved from 91.7 to 92.5 on the test set. We release this final model to the research community. We note that the strongest baseline, MaxSim, is a nonparametric model that requires iterating over all training tuples to provide a score to new tuples. This is a serious bottleneck for use in NLP applications that may need to issue large numbers of 1451 Bilinear AVG Bilinear LSTM DNN AVG DNN LSTM CE hinge CE hinge CE hinge CE hinge random 90 84 91 83 91 87 88 57 mix 90 83 90 87 90 78 82 63 max 86 75 65 66 61 52 56 52 Table 3: Accuracies (%) on DEV2 of models trained with two loss functions (cross entropy (CE) and hinge) and three sampling strategies (random, mix, and max). The best accuracy for each model is shown in bold. Cross entropy with random sampling is best across models and is also fastest (see Table 4). queries. Our models are parametric models that can compress a large training set into a fixed number of parameters. This makes them extremely fast for answering queries, particularly the AVG models, enabling use in downstream NLP applications. 7 Generating and Scoring Novel Tuples We now measure our model’s ability to score novel tuples generated automatically from ConceptNet and Wikipedia. We first describe simple procedures to generate candidate tuples from these two datasets. We then score the tuples using our MaxSim baseline and the trained Bilinear AVG model.5 We evaluate the highly-scoring tuples using a small-scale manual evaluation. The DNN AVG and Bilinear AVG models reached the highest TEST accuracies in our evaluation, though in preliminary experiments we found that the Bilinear AVG model appeared to perform better when scoring the novel tuples described below. We suspect this is because the DNN function class has more flexibility than the bilinear one. When scoring novel tuples, many of which may be highly noisy, it appears that the constrained structure of the Bilinear AVG model makes it more robust to the noise. 7.1 Generating Tuples From ConceptNet In order to get new tuples, we automatically modify existing ConceptNet tuples. We take an existing tuple and randomly change one of the three fields (t1, t2, or R), ensuring that the result is not a tuple existing in ConceptNet. 
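As a minimal sketch of the perturbation step just described, the snippet below swaps a single field of an existing tuple with the corresponding field of another tuple and keeps only results that are not already in the tuple set; the three-tuple list is illustrative.

```python
import random

random.seed(0)
existing = [("soak in a hotspring", "CAUSES", "get pruny skin"),
            ("bus", "ISA", "public transportation"),
            ("school", "HASPROPERTY", "educational")]
existing_set = set(existing)

def perturb(tuples, n_candidates=5):
    """Generate novel candidate tuples by randomly changing one field."""
    candidates = []
    while len(candidates) < n_candidates:
        t1, rel, t2 = random.choice(tuples)
        other = random.choice(tuples)
        field = random.randrange(3)            # which field to replace
        new = (other[0], rel, t2) if field == 0 else \
              (t1, other[1], t2) if field == 1 else (t1, rel, other[2])
        if new not in existing_set and new not in candidates:
            candidates.append(new)
    return candidates

for cand in perturb(existing):
    print(cand)   # candidates to be scored by the trained model
```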
We then score these tuples using MaxSim and the Bilinear AVG model and analyze the results. 7.2 Generating Tuples from Wikipedia We also propose a simple method to extract candidate tuples from raw text. We first run the Stanford part-of-speech (POS) tagger (Toutanova et 5For the results in this section, we used the Bilinear AVG model that achieved 91.7 on TEST rather than the one augmented with additional data. t1, R, t2 score bus, ISA, public transportation 0.95 bus, ISA, public transit 0.90 bus, ISA, mass transit 0.79 bus, ATLOCATION, downtown area 0.98 bus, ATLOCATION, subway station 0.98 bus, ATLOCATION, city center 0.94 bus, CAPABLEOF, low cost 0.72 bus, CAPABLEOF, local service 0.65 bus, CAPABLEOF, train service 0.63 Table 6: Top Wikipedia tuples for 3 relations with t1 = bus, scored by Bilinear AVG model. al., 2003) on the terms in our ConceptNet training tuples. We enumerate the 50 most frequent term pair tag sequences for each relation. We do limited manual filtering of the frequent tag sequences, namely removing the sequences “DT NN NN” and “DT JJ NN” for the ISA relation. We do this in order to reduce noise in the extracted tuples. To focus on finding nontrivial tuples, for each relation we retain the top 15 POS tag sequences in which t1 or t2 has at least two words. We then run the tagger on sentences from English Wikipedia. We extract word sequence pairs corresponding to the relation POS tag sequence pairs, requiring that there be a gap of at least one word between the two terms. We then remove word sequence pairs in which one term is solely one of the following words: be, the, always, there, has, due, however. We also remove tuples containing words that are not in the vocabulary of our ConceptNet-trained embeddings. We require that one term does not include the other term. We create tuples consisting of the two terms and all possible relations that occur with the POS sequences of those two terms. Finally, we remove tuples that exactly match our ConceptNet training tuples. We use our trained Bilinear AVG model to score these tuples. We extract term pairs that occur within the same sentence because we hope that these will have higher precision than if we were to pair together arbitrary pairs. Some example tuples for t1 = bus are shown in Table 6. 1452 7.3 Manual Analysis of Novel Tuples To evaluate our models on newly generated tuples, we rank them using different models and manually score the high-ranking tuples for quality. We first randomly sampled 3000 tuples from each set of novel tuples. We do so due to the time requirements of the MaxSim baseline, which requires iterating through the entire training set for each candidate tuple. We score these sampled tuples using MaxSim and the Bilinear AVG model and rank them by their scores. The top 100 tuples under each ranking were given to an annotator who is a native English speaker. The annotator assigned a quality score to each tuple, using the same 0-4 annotation scheme as Speer et al. (2010): 0 (“Doesn’t make sense”), 1 (“Not true”), 2 (“Opinion/Don’t know”), 3 (“Sometimes true”), and 4 (“Generally true”). We report the average quality score across each set of 100 tuples. The results are shown in Table 7. To calibrate the scores, we also gave two samples of ConceptNet (CN) tuples to the annotator: a sample of 100 high-confidence tuples (first row) and a sample of 100 medium-confidence tuples (second row). 
We find the high-confidence tuples to be of high quality, recording an average of 3.68, though the medium-confidence tuples drop to 3.14. The next two rows show the quality scores of the MaxSim baseline and the Bilinear AVG model. The latter outperforms the baseline and matches the quality of the medium-confidence ConceptNet tuples. Since our novel tuples are not contained in ConceptNet, this result suggests that our model can be used to add medium-confidence tuples to ConceptNet. The novel Wikipedia tuples (top 100 tuples ranked by Bilinear AVG model) had a lower quality score (2.78), but this is to be expected due to the difference in domain. Since Wikipedia contains a wide variety of text, we found the novel tuples to be noisier than those from ConceptNet. Still, we are encouraged that on average the tuples are judged to be close to “sometimes true.” 7.4 Text Analysis with Commonsense Tuples We note that our method of tuple extraction and scoring could be used as an aid in applications that require sentence understanding. Two example sentences are shown in Table 8, along with the top tuples extracted and scored using our method. The tuples capture general knowledge about phrases tuples quality high-confidence CN tuples 3.68 medium-confidence CN tuples 3.14 novel CN tuples, ranked by MaxSim 2.74 novel CN tuples, ranked by Bilinear AVG 3.20 novel Wiki tuples, ranked by Bilinear AVG 2.78 Table 7: Average quality scores from manual evaluation of novel tuples. Each row corresponds to a different set of tuples. See text for details. After nine years of primary school, students can go to the high school or to an educational institution. t1, R, t2 score school, HASPROPERTY, educational 0.89 school, ISA, educational institution 0.80 school, ISA, institution 0.78 school, HASPROPERTY, high 0.77 high school, ISA, institution 0.71 On March 14, 1964, Ruby was convicted of murder with malice, for which he received a death sentence. t1, R, t2 score murder, CAUSES, death∗ 1.00 murder, CAUSES, death sentence 0.86 murder, HASSUBEVENT, death 0.84 murder, CAPABLEOF, death 0.51 Table 8: Top ranked tuples extracted from two example sentences and scored by Bilinear AVG model. ∗= contained in ConceptNet. contained in the sentence, rather than necessarily indicating what the sentence means. This procedure could provide relevant commonsense knowledge for a downstream application that seeks to understand the sentence. We leave further investigation of this idea to future work. 8 Conclusion We proposed methods to augment curated commonsense resources using techniques from knowledge base completion. By scoring novel tuples, we showed how we can increase the applicability of the knowledge contained in ConceptNet. In future work, we will explore how to use our model to improve downstream NLP tasks, and consider applying our methods to other knowledge bases. We have released all of our resources—code, data, and trained models—to the research community.6 Acknowledgments We thank the anonymous reviewers, John Wieting, and Luke Zettlemoyer. 6Available at http://ttic.uchicago.edu/ ˜kgimpel/commonsense.html. 1453 References Basant Agarwal, Namita Mittal, Pooja Bansal, and Sonal Garg. 2015. Sentiment analysis using common-sense and context information. Comp. Int. and Neurosc. Gabor Angeli and Christopher Manning. 2013. Philosophers are mortal: Inferring the truth of unseen facts. In Proc. of CoNLL. Gabor Angeli and D. Christopher Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In Proc. of EMNLP. 
2016
137
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1456–1465, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Simpler Context-Dependent Logical Forms via Model Projections Reginald Long Stanford University [email protected] Panupong Pasupat Stanford University [email protected] Percy Liang Stanford University [email protected] Abstract We consider the task of learning a contextdependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new contextdependent semantic parsing datasets, and develop a new left-to-right parser. 1 Introduction Suppose we are only told that a piece of text (a command) in some context (state of the world) has some denotation (the effect of the command)—see Figure 1 for an example. How can we build a system to learn from examples like these with no initial knowledge about what any of the words mean? We start with the classic paradigm of training semantic parsers that map utterances to logical forms, which are executed to produce the denotation (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Zettlemoyer and Collins, 2009; Kwiatkowski et al., 2010). More recent work learns directly from denotations (Clarke et al., 2010; Liang, 2013; Berant et al., 2013; Artzi and Zettlemoyer, 2013), but in this setting, a constant struggle is to contain the exponential explosion of possible logical forms. With no initial lexicon and longer contextdependent texts, our situation is exacerbated. Context: Text: Pour the last green beaker into beaker 2. Then into the first beaker. Mix it. Denotation: Figure 1: Our task is to learn to map a piece of text in some context to a denotation. An example from the ALCHEMY dataset is shown. In this paper, we ask: what intermediate logical form is suitable for modeling this mapping? In this paper, we propose projecting a full semantic parsing model onto simpler models over equivalence classes of logical form derivations. As illustrated in Figure 2, we consider the following sequence of models: • Model A: our full model that derives logical forms (e.g., in Figure 1, the last utterance maps to mix(args[1][1])) compositionally from the text so that spans of the utterance (e.g., “it”) align to parts of the logical form (e.g., args[1][1], which retrieves an argument from a previous logical form). This is based on standard semantic parsing (e.g., Zettlemoyer and Collins (2005)). • Model B: collapse all derivations with the same logical form; we map utterances to full logical forms, but without an alignment between the utterance and logical forms. This “floating” approach was used in Pasupat and Liang (2015) and Wang et al. (2015). • Model C: further collapse all logical forms whose top-level arguments have the same denotation. 
In other words, we map utterances 1456 Mix it Model A mix args[1][1] mix(args[1][1]) Model B mix(args[1][1]) Mix it Model C Mix it mix args[1][1] mix(args[1][1]) Mix it mix pos(2) mix(pos(2)) Mix it mix pos(2) mix(pos(2)) mix(pos(2)) Mix it mix(beaker2) Mix it Figure 2: Derivations generated for the last utterance in Figure 1. All derivations above execute to mix(beaker2). Model A generates anchored logical forms (derivations) where words are aligned to predicates, which leads to multiple derivations with the same logical form. Model B discards these alignments, and Model C collapses the arguments of the logical forms to denotations. to flat logical forms (e.g., mix(beaker2)), where the arguments of the top-level predicate are objects in the world. This model is in the spirit of Yao et al. (2014) and Bordes et al. (2014), who directly predicted concrete paths in a knowledge graph for question answering. Model A excels at credit assignment: the latent derivation explains how parts of the logical form are triggered by parts of the utterance. The price is an unmanageably large search space, given that we do not have a seed lexicon. At the other end, Model C only considers a small set of logical forms, but the mapping from text to the correct logical form is more complex and harder to model. We collected three new context-dependent semantic parsing datasets using Amazon Mechanical Turk: ALCHEMY (Figure 1), SCENE (Figure 3), and TANGRAMS (Figure 4). Along the way, we develop a new parser which processes utterances left-to-right but can construct logical forms without an explicit alignment. Our empirical findings are as follows: First, Model C is surprisingly effective, mostly surpassing the other two given bounded computational resources (a fixed beam size). Second, on a synthetic dataset, with infinite beam, Model A outperforms the other two models. Third, we can bootstrap up to Model A from the projected models with finite beam. Context: Text: A man in a red shirt and orange hat leaves to the right, leaving behind a man in a blue shirt in the middle. He takes a step to the left. Denotation: Figure 3: SCENE dataset: Each person has a shirt of some color and a hat of some color. They enter, leave, move around on a stage, and trade hats. Context: Text: Delete the second figure. Bring it back as the first figure. Denotation: Figure 4: TANGRAMS dataset: One can add figures, remove figures, and swap the position of figures. All the figures slide to the left. 2 Task In this section, we formalize the task and describe the new datasets we created for the task. 2.1 Setup First, we will define the context-dependent semantic parsing task. Define w0 as the initial world state, which consists of a set of entities (beakers in ALCHEMY) and properties (location, color(s), and amount filled). The text x is a sequence of utterances x1, . . . , xL. For each utterance xi (e.g., “mix”), we have a latent logical form zi (e.g., mix(args[1][2])). Define the context ci = (w0, z1:i−1) to include the initial world state w0 and the history of past logical forms z1:i−1. Each logical form zi is executed on the context ci to produce the next state: wi = Exec(ci, zi) for each i = 1, . . . , L. Overloading notation, we write wL = Exec(w0, z), where z = (z1, . . . , zL). The learning problem is: given a set of training examples {(w0, x, wL)}, learn a mapping from the text x to logical forms z = (z1, . . . , zL) that produces the correct final state (wL = Exec(w0, z)). 
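To make this setup concrete, the following sketch (under assumed, simplified domain semantics, not the authors' implementation) shows a toy ALCHEMY-style world, an Exec function that threads the context through a sequence of flat logical forms, and the denotation check that provides the only supervision signal.

```python
# A minimal sketch of the context-dependent execution setup (assumed toy domain,
# not the paper's code). A world state is a tuple of beakers; each beaker is a
# (color, amount) pair. Logical forms here are flat (action, *args) triples whose
# arguments are already resolved to beaker indices, as in Model C.

def exec_one(world, z):
    """Apply a single flat logical form z = (action, *args) to a world state."""
    beakers = [list(b) for b in world]          # copy so states stay immutable
    action, args = z[0], z[1:]
    if action == "pour":                        # pour contents of src into dst
        src, dst = args
        beakers[dst][1] += beakers[src][1]
        beakers[src][1] = 0
    elif action == "drain":                     # empty a beaker
        beakers[args[0]][1] = 0
    elif action == "mix":                       # mixing turns the contents brown
        beakers[args[0]][0] = "brown"
    return tuple(tuple(b) for b in beakers)

def exec_sequence(w0, zs):
    """Exec(w0, z): thread the context through the whole sequence z1..zL."""
    w = w0
    for z in zs:
        w = exec_one(w, z)
    return w

def reaches_denotation(w0, zs, wL):
    """Training signal: does the candidate sequence reach the labeled final state?"""
    return exec_sequence(w0, zs) == wL

# Example: pour beaker 0 into beaker 1, then mix beaker 1.
w0 = (("green", 2), ("red", 1))
zs = [("pour", 0, 1), ("mix", 1)]
print(reaches_denotation(w0, zs, (("green", 0), ("brown", 3))))  # True
```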
1457 Dataset # examples # train # test words/example utterances SCENE 4402 3363 1039 56.2 “then one more”, “he moves back” ALCHEMY 4560 3661 899 39.9 “mix”, “throw the rest out” TANGRAMS 4989 4189 800 27.2 “undo”, “replace it”, “take it away” Table 1: We collected three datasets. The number of examples, train/test split, number of tokens per example, along with interesting phenomena are shown for each dataset. 2.2 Datasets We created three new context-dependent datasets, ALCHEMY, SCENE, and TANGRAMS (see Table 1 for a summary), which aim to capture a diverse set of context-dependent linguistic phenomena such as ellipsis (e.g., “mix” in ALCHEMY), anaphora on entities (e.g., “he” in SCENE), and anaphora on actions (e.g., “repeat step 3”, “bring it back” in TANGRAMS). For each dataset, we have a set of properties and actions. In ALCHEMY, properties are color, and amount; actions are pour, drain, and mix. In SCENE, properties are hat-color and shirt-color; actions are enter, leave, move, and trade-hats. In TANGRAMS, there is one property (shape), and actions are add, remove, and swap. In addition, we include the position property (pos) in each dataset. Each example has L = 5 utterances, each denoting some transformation of the world state. Our datasets are unique in that they are grounded to a world state and have rich linguistic context-dependence. In the context-dependent ATIS dataset (Dahl et al., 1994) used by Zettlemoyer and Collins (2009), logical forms of utterances depend on previous logical forms, though there is no world state and the linguistic phenomena is limited to nominal references. In the map navigation dataset (Chen and Mooney, 2011), used by Artzi and Zettlemoyer (2013), utterances only reference the current world state. Vlachos and Clark (2014) released a corpus of annotated dialogues, which has interesting linguistic contextdependence, but there is no world state. Data collection. Our strategy was to automatically generate sequences of world states and ask Amazon Mechanical Turk (AMT) workers to describe the successive transformations. Specifically, we started with a random world state w0. For each i = 1, . . . , L, we sample a valid action and argument (e.g., pour(beaker1, beaker2)). To encourage context-dependent descriptions, we upweight recently used actions and arguments (e.g., the next action is more like to be drain(beaker2) rather than drain(beaker5)). Next, we presented an AMT worker with states w0, . . . , wL and asked the worker to write a description in between each pair of successive states. In initial experiments, we found it rather nontrivial to obtain interesting linguistic contextdependence in these micro-domains: often a context-independent utterance such as “beaker 2” is just clearer and not much longer than a possibly ambiguous “it”. We modified the domains to encourage more context. For example, in SCENE, we removed any visual indication of absolute position and allowed people to only move next to other people. This way, workers would say “to the left of the man in the red hat” rather than “to position 2”. 3 Model We now describe Model A, our full contextdependent semantic parsing model. First, let Z denote the set of candidate logical forms (e.g., pour(color(green),color(red))). Each logical form consists of a top-level action with arguments, which are either primitive values (green, 3, etc.), or composed via selection and superlative operations. See Table 2 for a full description. 
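As a rough illustration of the candidate space defined by Table 2, the sketch below encodes compositional logical forms as nested tuples and evaluates them against a world state and a history of past logical forms, so that context predicates such as actions[i] and args[i][j] can be resolved. The data structures and operator names are assumptions made for exposition, not the system's actual representation.

```python
# Illustrative encoding of the grammar in Table 2. A world is a list of entities
# (dicts of properties); `history` stores (action, resolved_argument_values) for
# past logical forms so that actions[i] / args[i][j] can be resolved from context.

def eval_arg(lf, world, history):
    """Resolve a (possibly nested) argument logical form to a concrete entity."""
    op = lf[0]
    if op == "select":                     # Property + Value -> Set
        _, prop, val = lf
        return [e for e in world if e[prop] == val]
    if op in ("argmin", "argmax"):         # Set + Property -> Value (superlative)
        _, sub, prop = lf
        entities = eval_arg(sub, world, history)
        return (min if op == "argmin" else max)(entities, key=lambda e: e[prop])
    if op == "index":                      # Set + Int -> Value ("first", "second", ...)
        _, sub, i = lf
        return eval_arg(sub, world, history)[i - 1]
    if op == "arg_ref":                    # args[i][j]: reuse an earlier argument
        _, i, j = lf
        return history[i - 1][1][j - 1]
    raise ValueError("unknown operator: %r" % op)

def eval_root(lf, world, history):
    """Top-level Action(Value, ...), possibly with an actions[i] context reference."""
    action, args = lf[0], lf[1:]
    if isinstance(action, tuple) and action[0] == "action_ref":
        action = history[action[1] - 1][0]
    values = [eval_arg(a, world, history) for a in args]
    return action, values

# "Pour the last green beaker into beaker 2."
world = [{"color": "green", "pos": 1}, {"color": "red", "pos": 2},
         {"color": "green", "pos": 3}]
z1 = ("pour", ("argmax", ("select", "color", "green"), "pos"),
              ("index", ("select", "pos", 2), 1))
action, vals = eval_root(z1, world, [])
print(action, [v["pos"] for v in vals])        # pour [3, 2]

# A follow-up utterance reusing the action and first argument from the context.
history = [(action, vals)]
z2 = (("action_ref", 1), ("arg_ref", 1, 1), ("index", ("select", "pos", 1), 1))
action2, vals2 = eval_root(z2, world, history)
print(action2, [v["pos"] for v in vals2])      # pour [3, 1]
```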
One notable feature of the logical forms is the context dependency: for example, given some context (w0, z1:4), the predicate actions[2] refers to the action of z2 and args[2][1] refers to first argument of z2.1 We use the term anchored logical forms (a.k.a. derivations) to refer to logical forms augmented with alignments between sub-logical forms of zi and spans of the utterance xi. In the example above, color(green) might align with “green beaker” from Figure 1; see Figure 2 for another example. 1These special predicates play the role of references in Zettlemoyer and Collins (2009). They perform contextindependent parsing and resolve references, whereas we resolve them jointly while parsing. 1458 Property[p] Value[v] ⇒Set[p(v)] all entities whose property p is v Set[s] Property[p] ⇒Value[argmin/argmax(s, p)] element in s with smallest/largest p Set[s] Int[i] ⇒Value[s[i]] i-th element of s Action[a] Value[v1] Value[v2] ⇒Root[a(v1, v2)] top-level action applied to arguments v1, v2 Table 2: Grammar that defines the space of candidate logical forms. Values include numbers, colors, as well as special tokens args[i][j] (for all i ∈{1, . . . , L} and j ∈{1, 2}) that refer to the j-th argument used in the i-th logical form. Actions include the fixed domain-specific set plus special tokens actions[i] (for all i ∈{1, . . . , L}), which refers to the i-th action in the context. Derivation condition Example (F1) zi contains predicate r (zi contains predicate pour, “pour”) (F2) property p of zi.bj is y (color of arg 1 is green, “green”) (F3) action zi.a is a and property p of zi.yj is y (action is pour and pos of arg 2 is 2, “pour, 2”) (F4) properties p of zi.v1 is y and p′ of zi.v2 is y′ (color of arg 1 is green and pos of arg 2 is 2, “first green, 2”) (F5) arg zi.vj is one of zi−1’s args (arg reused, “it”) (F6) action zi.a = zi−1.a (action reused, “pour”) (F7) properties p of zi.yj is y and p′ of zi−1.yk is y′ (pos of arg 1 is 2 and pos of prev. arg 2 is 2, “then”) (F8) t1 < s2 spans don’t overlap Table 3: Features φ(xi, ci, zi) for Model A: The left hand side describes conditions under which the system fires indicator features, and right hand side shows sample features for each condition. For each derivation condition (F1)–(F7), we conjoin the condition with the span of the utterance that the referenced actions and arguments align to. For condition (F8), we just fire the indicator by itself. Log-linear model. We place a conditional distribution over anchored logical forms zi ∈ Z given an utterance xi and context ci = (w0, z1:i−1), which consists of the initial world state w0 and the history of past logical forms z1:i−1. We use a standard log-linear model: pθ(zi | xi, ci) ∝exp(φ(xi, ci, zi) · θ), (1) where φ is the feature mapping and θ is the parameter vector (to be learned). Chaining these distributions together, we get a distribution over a sequence of logical forms z = (z1, . . . , zL) given the whole text x: pθ(z | x, w0) = L Y i=1 pθ(zi | xi, (w0, z1:i−1)). (2) Features. Our feature mapping φ consists of two types of indicators: 1. For each derivation, we fire features based on the structure of the logical form/spans. 2. For each span s (e.g., “green beaker”) aligned to a sub-logical form z (e.g., color(green)), we fire features on unigrams, bigrams, and trigrams inside s conjoined with various conditions of z. 
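Given any feature function φ, the distribution in equation (1) is just a softmax over the candidate logical forms that survive the beam. The sketch below uses invented string-keyed indicator features as stand-ins for the templates of Table 3; the feature names and weights are illustrative assumptions, not the model's actual feature set.

```python
import math
from collections import Counter

def features(x_i, context, z_i):
    """Sparse indicator features phi(x_i, c_i, z_i) as a name -> count map."""
    phi = Counter()
    action = z_i[0]
    phi["action=%s" % action] += 1                            # (F1)-style feature
    for word in x_i.split():
        phi["action=%s&word=%s" % (action, word.lower())] += 1  # conjoined n-gram
    if context and action == context[-1][0]:
        phi["action_reused"] += 1                             # (F6)-style feature
    return phi

def p_z_given_x(x_i, context, candidates, theta):
    """Equation (1): softmax over candidate logical forms under parameters theta."""
    scores = [sum(theta.get(f, 0.0) * v
                  for f, v in features(x_i, context, z).items())
              for z in candidates]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]                  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

theta = {"action=mix&word=mix": 2.0, "action=pour&word=pour": 2.0}
candidates = [("mix", 2), ("pour", 2, 1), ("drain", 2)]
print(p_z_given_x("Mix it", [("pour", 3, 2)], candidates, theta))
```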
The exact features given in Table 3, references the first two utterances of Figure 1 and the associated logical forms below: x1 = “Pour the last green beaker into beaker 2.” z1 = pour(argmin(color(green),pos),pos(2)) x2 = “Then into the first beaker.” z2 = actions[1](args[1][2],pos(3)). We describe the notation we use for Table 3, restricting our discussion to actions that have two or fewer arguments. Our featurization scheme, however, generalizes to an arbitrary number of arguments. Given a logical form zi, let zi.a be its action and (zi.b1, zi.b2) be its arguments (e.g., color(green)). The first and second arguments are anchored over spans [s1, t1] and [s2, t2], respectively. Each argument zi.bj has a corresponding value zi.vj (e.g., beaker1), obtained by executing zi.bj on the context ci. Finally, let j, k ∈{1, 2} be indices of the arguments. For example, we would label the constituent parts of z1 (defined above) as follows: • z1.a = pour • z1.b1 = argmin(color(green),pos) • z1.v1 = beaker3 • z1.b2 = pos(2) • z1.v2 = beaker2 1459 4 Left-to-right parsing We describe a new parser suitable for learning from denotations in the context-dependent setting. Like a shift-reduce parser, we proceed left to right, but each shift operation advances an entire utterance rather than one word. We then sit on the utterance for a while, performing a sequence of build operations, which either combine two logical forms on the stack (like the reduce operation) or generate fresh logical forms, similar to what is done in the floating parser of Pasupat and Liang (2015). Our parser has two desirable properties: First, proceeding left-to-right allows us to build and score logical forms zi that depend on the world state wi−1, which is a function of the previous logical forms. Note that wi−1 is a random variable in our setting, whereas it is fixed in Zettlemoyer and Collins (2009). Second, the build operation allows us the flexibility to handle ellipsis (e.g., “Mix.”) and anaphora on full logical forms (e.g., “Do it again.”), where there’s not a clear alignment between the words and the predicates generated. The parser transitions through a sequence of hypotheses. Each hypothesis is h = (i, b, σ), where i is the index of the current utterance, where b is the number of predicates constructed on utterance xi, and σ is a stack (list) of logical forms. The stack includes both the previous logical forms z1:i−1 and fragments of logical forms built on the current utterance. When processing a particular hypothesis, the parser can choose to perform either the shift or build operation: Shift: The parser moves to the next utterance by incrementing the utterance index i and resetting b, which transitions a hypothesis from (i, b, σ) to (i + 1, 0, σ). Build: The parser creates a new logical form by combining zero or more logical forms on the stack. There are four types of build operations: 1. Create a predicate out of thin air (e.g., args[1][1] in Figure 5). This is useful when the utterance does not explicitly reference the arguments or action. For example, in Figure 5, we are able to generate the logical form args[1][1] in the presence of ellipsis. 2. Create a predicate anchored to some span of the utterance (e.g., actions[1] anchored to “Repeat”). This allows us to do credit assignment and capture which part of the utterance explains which part of the logical form. 3. Pop z from the stack σ and push z′ onto σ, where z′ is created by applying a rule in Table 2 to z. 4. 
Pop z, z′ from the stack σ and push z′′ onto σ, where z′′ is created by applying a rule in Table 2 to z, z′ (e.g., actions[1](args[1] [1]) by the top-level root rule). The build step stops once a maximum number of predicates B have been constructed or when the top-level rule is applied. We have so far described the search space over logical forms. In practice, we keep a beam of the K hypotheses with the highest score under the current log-linear model. 5 Model Projections Model A is ambitious, as it tries to learn from scratch how each word aligns to part of the logical form. For example, when Model A parses “Mix it”, one derivation will correctly align “Mix” to mix, but others will align “Mix” to args[1] [1], “Mix” to pos(2), and so on (Figure 2). As we do not assume a seed lexicon that could map “Mix” to mix, the set of anchored logical forms is exponentially large. For example, parsing just the first sentence of Figure 1 would generate 1,216,140 intermediate anchored logical forms. How can we reduce the search space? The key is that the space of logical forms is much smaller than the space of anchored logical forms. Even though both grow exponentially, dealing directly with logical forms allows us to generate pour without the combinatorial choice over alignments. We thus define Model B over the space of these logical forms. Figure 2 shows that the two anchored logical forms, which are treated differently in Model A are collapsed in Model B. This dramatically reduces the search space; parsing the first sentence of Figure 1 generates 7,047 intermediate logical forms. We can go further and notice that many compositional logical forms reduce to the same flat logical form if we evaluate all the arguments. For example, in Figure 2, mix(args[1] [1]) and mix(pos(2)) are equivalent to mix(beaker2). We define Model C to be the 1460 Delete the second figure. Repeat. delete 2 pos(2) actions[1](args[1]) delete(pos(2)) delete(pos(2)) actions[1](args[1][1]) actions[1] args[1][1] Delete the second figure. Repeat. delete 2 pos(2) args[1][1] actions[1] delete(pos(2)) delete(pos(2)) actions[1] args[1][1] Delete the second figure. Repeat. delete 2 pos(2) actions[1] delete(pos(2)) delete(pos(2)) actions[1] Delete the second figure. Repeat. delete 2 pos(2) delete(pos(2)) delete(pos(2)) (1) (2) (3) (4) Figure 5: Suppose we have already constructed delete(pos(2)) for “Delete the second figure.” Continuing, we shift the utterance “Repeat”. Then, we build action[1] aligned to the word “Repeat.” followed by args[1][1], which is unaligned. Finally, we combine the two logical forms. space of these flat logical forms which consist of a top-level action plus primitive arguments. Using Model C, parsing the first sentence of Figure 1 generates only 349 intermediate logical forms. Note that in the context-dependent setting, the number of flat logical forms (Model C) still increases exponentially with the number of utterances, but it is an overwhelming improvement over Model A. Furthermore, unlike other forms of relaxation, we are still generating logical forms that can express any denotation as before. The gains from Model B to Model C hinge on the fact that in our world, the number of denotations is much smaller than the number of logical forms. Projecting the features. While we have defined the space over logical forms for Models B and C, we still need to define a distribution over these spaces to to complete the picture. To do this, we propose projecting the features of the log-linear model (1). 
Define ΠA→B to be a map from a anchored logical form zA (e.g., mix(pos(2) ) aligned to “mix”) to an unanchored one zB (e.g., mix(pos(2))), and define ΠB→C to be a map from zB to the flat logical form zC (e.g., mix(beaker2)). We construct a log-linear model for Model B by constructing features φ(zB) (omitting the dependence on xi, ci for convenience) based on the Model A features φ(zA). Specifically, φ(zB) is the component-wise maximum of φ(zA) over all zA that project down to zB; φ(zC) is defined similarly: φ(zB) def = max{φ(zA) : ΠA→B(zA) = zB}, (3) φ(zC) def = max{φ(zB) : ΠB→C(zB) = zC}. (4) Concretely, Model B’s features include indicator features over LF conditions in Table 3 conjoined with every n-gram of the entire utterance, as there is no alignment. This is similar to the model of Pasupat and Liang (2015). Note that most of the derivation conditions (F2)–(F7) already depend on properties of the denotations of the arguments, so in Model C, we can directly reason over the space of flat logical forms zC (e.g., mix(beaker2)) rather than explicitly computing the max over more complex logical forms zB (e.g., mix(color(red))). Expressivity. In going from Model A to Model C, we gain in computational efficiency, but we lose in modeling expressivity. For example, for “second green beaker” in Figure 1, instead of predicting color(green)[2], we would have to predict beaker3, which is not easily explained by the words “second green beaker” using the simple features in Table 3. At the same time, we found that simple features can actually simulate some logical forms. For example, color(green) can be explained by the feature that looks at the color property of beaker3. Nailing color(green)[2], however, is not easy. Surprisingly, Model C can use a conjunction of features to express superlatives (e.g., argmax(color(red),pos)) by using one feature that places more mass on selecting objects that are red and another feature that places more mass on objects that have a greater position value. 6 Experiments Our experiments aim to explore the computationexpressivity tradeoff in going from Model A to Model B to Model C. We would expect that under the computational constraint of a finite beam size, Model A will be hurt the most, but with an 1461 Dataset Model 3-acc 3-ora 5-acc 5-ora ALCHEMY B 0.189 0.258 0.037 0.055 C 0.568 0.925 0.523 0.809 SCENE B 0.068 0.118 0.017 0.031 C 0.232 0.431 0.147 0.253 TANGRAMS B 0.649 0.910 0.276 0.513 C 0.567 0.899 0.272 0.698 Table 4: Test set accuracy and oracle accuracy for examples containing L = 3 and L = 5 utterances. Model C surpasses Model B in both accuracy and oracle on ALCHEMY and SCENE, whereas Model B does better in TANGRAMS. infinite beam, Model A should perform better. We evaluate all models on accuracy, the fraction of examples that a model predicts correctly. A predicted logical form z is deemed to be correct for an example (w0, x, wL) if the predicted logical form z executes to the correct final world state wL. We also measure the oracle accuracy, which is the fraction of examples where at least one z on the beam executes to wL. All experiments train for 6 iterations using AdaGrad (Duchi et al., 2010) and L1 regularization with a coefficient of 0.001. 6.1 Real data experiments Setup. We use a beam size of 500 within each utterance, and prune to the top 5 between utterances. For the first two iterations, Models B and C train on only the first utterance of each example (L = 1). In the remaining iterations, the models train on two utterance examples. 
We then evaluate on examples with L = 1, . . . , 5, which tests our models ability to extrapolate to longer texts. Accuracy with finite beam. We compare models B and C on the three real datasets for both L = 3 and L = 5 utterances (Model A was too expensive to use). Table 4 shows that on 5 utterance examples, the flatter Model C achieves an average accuracy of 20% higher than the more compositional Model B. Similarly, the average oracle accuracy is 39% higher. This suggests that (i) the correct logical form often falls off the beam for Model B due to a larger search space, and (ii) the expressivity of Model C is sufficient in many cases. On the other hand, Model B outperforms Model C on the TANGRAMS dataset. This happens for two reasons. The TANGRAMS dataset has the smallest search space, since all of the utterances refer to objects using position only. Additionally, many utterances reference logical forms that Model C is unable to express, such as “repeat the first step”, or “add it back”. Figure 6 shows how the models perform as the number of utterances per example varies. When the search space is small (fewer number of utterances), Model B outperforms or is competitive with Model C. However, as the search space increases (tighter computational constraints), Model C does increasingly better. Overall, both models perform worse as L increases, since to predict the final world state wL correctly, a model essentially needs to predict an entire sequence of logical forms z1, . . . , zL, and errors cascade. Furthermore, for larger L, the utterances tend to have richer context-dependence. 6.2 Artificial data experiments Setup. Due to the large search space, running model A on real data is impractical. In order feasibly evaluate Model A, we constructed an artificial dataset. The worlds are created using the procedure described in Section 2.2. We use a simple template to generate utterances (e.g., “drain 1 from the 2 green beaker”). To reduce the search space for Model A, we only allow actions (e.g., drain) to align to verbs and property values (e.g., green) to align to adjectives. Using these linguistic constraints provides a slightly optimistic assessment of Model A’s performance. We train on a dataset of 500 training examples and evaluate on 500 test examples. We repeat this procedure for varying beam sizes, from 40 to 260. The model only uses features (F1) through (F3). Accuracy under infinite beam. Since Model A is more expressive, we would expect it to be more powerful when we have no computational constraints. Figure 7 shows that this is indeed the case: When the beam size is greater than 250, all models attain an oracle of 1, and Model A outperforms Model B, which performs similarly to Model C. This is because the alignments provide a powerful signal for constructing the logical forms. Without alignments, Models B and C learn noisier features, and accuracy suffers accordingly. Bootstrapping. Model A performs the best with unconstrained computation, and Model C performs the best with constrained computation. Is there some way to bridge the two? Even though 1462 (a) ALCHEMY (b) SCENE (c) TANGRAMS Figure 6: Test results on our three datasets as we vary the number of utterances. The solid lines are the accuracy, and the dashed line are the oracles: With finite beam, Model C significantly outperforms Model B on ALCHEMY and SCENE, but is slightly worse on TANGRAMS. Figure 7: Test results on our artificial dataset with varying beam sizes. 
The solid lines are the accuracies, and the dashed line are the oracle accuracies. Model A is unable to learn anything with beam size < 240. However, for beam sizes larger than 240, Model A attains 100% accuracy. Model C does better than Models A and B when the beam size is small < 40, but otherwise performs comparably to Model B. Bootstrapping Model A using Model C parameters outperforms all of the other models and attains 100% even with smaller beams. Model C has limited expressivity, it can still learn to associate words like “green” with their corresponding predicate green. These should be useful for Model A too. To operationalize this, we first train Model C and use the parameters to initialize model A. Then we train Model A. Figure 7 shows that although Model A and C predict different logical forms, the initialization allows Model C to A to perform well in constrained beam settings. This bootstrapping delete the last figure. add it back. Text: Context: Model C: Model B: Denotation: z1=remove(pos(5)) z2=add(cat,1) z1=remove(pos(5)) z2=add(args[1][1],pos(args[1][1])) Figure 8: Predicted logical forms for this text: The logical form add takes a figure and position as input. Model B predicts the correct logical form. Model C does not understand that “back” refers to position 5, and adds the cat figure to position 1. model beam action argument context noise B 0.47 0.03 0.17 0.23 0.04 C 0.15 0.03 0.25 0.5 0.07 Table 5: Percentage of errors for Model B and C: Model B suffers predominantly from computation constraints, while Model C suffers predominantly from a lack of expressivity. works here because Model C is a projection of Model A, and thus they share the same features. 6.3 Error Analysis We randomly sampled 20 incorrect predictions on 3 utterance examples from each of the three real datasets for Model B and Model C. We categorized each prediction error into one of the following categories: (i) logical forms falling off the beam; (ii) choosing the wrong action (e.g., mapping “drain” to pour); (iii) choosing the wrong 1463 argument due to misunderstanding the description (e.g., mapping “third beaker” to pos(1)); (iv) choosing the wrong action or argument due to misunderstanding of context (see Figure 8); (v) noise in the dataset. Table 5 shows the fraction of each error category. 7 Related Work and Discussion Context-dependent semantic parsing. Utterances can depend on either linguistic context or world state context. Zettlemoyer and Collins (2009) developed a model that handles references to previous logical forms; Artzi and Zettlemoyer (2013) developed a model that handles references to the current world state. Our system considers both types of context, handling linguistic phenomena such as ellipsis and anaphora that reference both previous world states and logical forms. Logical form generation. Traditional semantic parsers generate logical forms by aligning each part of the logical form to the utterance (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011). In general, such systems rely on a lexicon, which can be hand-engineered, extracted (Cai and Yates, 2013; Berant et al., 2013), or automatically learned from annotated logical forms (Kwiatkowski et al., 2010; Chen, 2012). Recent work on learning from denotations has moved away from anchored logical forms. Pasupat and Liang (2014) and Wang et al. (2015) proposed generating logical forms without alignments, similar to our Model B. Yao et al. (2014) and Bordes et al. 
(2014) have explored predicting paths in a knowledge graph directly, which is similar to the flat logical forms of Model C. Relaxation and bootstrapping. The idea of first training a simpler model in order to work up to a more complex one has been explored other contexts. In the unsupervised learning of generative models, bootstrapping can help escape local optima and provide helpful regularization (Och and Ney, 2003; Liang et al., 2009). When it is difficult to even find one logical form that reaches the denotation, one can use the relaxation technique of Steinhardt and Liang (2015). Recall that projecting from Model A to C creates a more computationally tractable model at the cost of expressivity. However, this is because Model C used a linear model. One might imagine that a non-linear model would be able to recuperate some of the loss of expressivity. Indeed, Neelakantan et al. (2016) use recurrent neural networks attempt to perform logical operations. One could go one step further and bypass logical forms altogether, performing all the logical reasoning in a continuous space (Bowman et al., 2014; Weston et al., 2015; Guu et al., 2015; Reed and de Freitas, 2016). This certainly avoids the combinatorial explosion of logical forms in Model A, but could also present additional optimization challenges. It would be worth exploring this avenue to completely understand the computationexpressivity tradeoff. Reproducibility Our code, data, and experiments are available on CodaLab at https:// worksheets.codalab.org/worksheets/ 0xad3fc9f52f514e849b282a105b1e3f02/. Acknowledgments We thank the anonymous reviewers for their constructive feedback. The third author is supported by a Microsoft Research Faculty Fellowship. References Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49–62. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). A. Bordes, S. Chopra, and J. Weston. 2014. Question answering with subgraph embeddings. In Empirical Methods in Natural Language Processing (EMNLP). S. R. Bowman, C. Potts, and C. D. Manning. 2014. Can recursive neural tensor networks learn logical reasoning? In International Conference on Learning Representations (ICLR). Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL). D. L. Chen and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI), pages 859–865. 1464 D. L. Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Association for Computational Linguistics (ACL). J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL), pages 18–27. D. A. Dahl, M. Bates, M. Brown, W. Fisher, K. Hunicke-Smith, D. Pallett, C. Pao, A. Rudnicky, and E. Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Workshop on Human Language Technology, pages 43–48. J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). K. Guu, J. Miller, and P. Liang. 2015. 
Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP). T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512–1523. P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 91–99. P. Liang. 2013. Lambda dependency-based compositional semantics. arXiv. A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR). F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51. P. Pasupat and P. Liang. 2014. Zero-shot entity extraction from web pages. In Association for Computational Linguistics (ACL). P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). S. Reed and N. de Freitas. 2016. Neural programmerinterpreters. In International Conference on Learning Representations (ICLR). J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Advances in Neural Information Processing Systems (NIPS). A. Vlachos and S. Clark. 2014. A new corpus and imitation learning framework for context-dependent semantic parsing. Transactions of the Association for Computational Linguistics (TACL), 2:547–559. Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL). J. Weston, A. Bordes, S. Chopra, and T. Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967. X. Yao, J. Berant, and B. Van-Durme. 2014. Freebase QA: Information extraction or semantic parsing. In Workshop on Semantic parsing. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687. L. S. Zettlemoyer and M. Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). 1465
2016
138
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1466–1477, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Fast Unified Model for Parsing and Sentence Understanding Samuel R. Bowman1,2,3,∗ [email protected] Jon Gauthier2,3,4,∗ [email protected] Abhinav Rastogi3,5 [email protected] Raghav Gupta2,3,6 [email protected] Christopher D. Manning1,2,3,6 [email protected] Christopher Potts1,6 [email protected] 1Stanford Linguistics 2Stanford NLP Group 3Stanford AI Lab 4Stanford Symbolic Systems 5Stanford Electrical Engineering 6Stanford Computer Science Abstract Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shiftreduce parser. Our model supports batched computation for a speedup of up to 25× over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models. 1 Introduction A wide range of current models in NLP are built around a neural network component that produces vector representations of sentence meaning (e.g., Sutskever et al., 2014; Tai et al., 2015). This component, the sentence encoder, is generally formulated as a learned parametric function from a sequence of word vectors to a sentence vector, and this function can take a range of different forms. Common sentence encoders include sequencebased recurrent neural network models (RNNs, see Figure 1a) with Long Short-Term Memory (LSTM, ∗The first two authors contributed equally. the old cat ate ... the cat sat down ... (a) A conventional sequence-based RNN for two sentences. ... the old cat ate ate the old cat old cat cat old the ... the cat sat down sat down down sat the cat cat the (b) A conventional TreeRNN for two sentences. Figure 1: An illustration of two standard designs for sentence encoders. The TreeRNN, unlike the sequence-based RNN, requires a substantially different connection structure for each sentence, making batched computation impractical. Hochreiter and Schmidhuber, 1997), which accumulate information over the sentence sequentially; convolutional neural networks (Kalchbrenner et al., 2014; Zhang et al., 2015), which accumulate information using filters over short local sequences of words or characters; and tree-structured recursive neural networks (TreeRNNs, Goller and Küchler, 1996; Socher et al., 2011a, see Figure 1b), which propagate information up a binary parse tree. Of these, the TreeRNN appears to be the principled choice, since meaning in natural language sentences is known to be constructed recursively according to a tree structure (Dowty, 2007, i.a.). 
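The batching problem can be seen directly in a minimal recursive encoder: because the pattern of composition calls is dictated by each sentence's parse, two sentences generally induce two different computation graphs, so the per-step synchronization that makes batched sequence RNNs fast is unavailable. The sketch below is schematic and does not correspond to any of the cited systems.

```python
import numpy as np

D = 50
rng = np.random.default_rng(0)
W = rng.standard_normal((D, 2 * D)) * 0.1   # shared composition weights
embeddings = {}                             # stand-in for a learned embedding table

def embed(word):
    if word not in embeddings:
        embeddings[word] = rng.standard_normal(D) * 0.1
    return embeddings[word]

def tree_rnn(node):
    """Encode a binarized parse: a leaf is a word, an internal node a (left, right) pair."""
    if isinstance(node, str):
        return embed(node)
    left, right = node
    child = np.concatenate([tree_rnn(left), tree_rnn(right)])
    return np.tanh(W @ child)               # the call graph follows each sentence's parse

# Two sentences, two different parses -> two different computation graphs.
s1 = ("the", (("old", "cat"), "ate"))
s2 = (("the", "cat"), ("sat", "down"))
print(tree_rnn(s1).shape, tree_rnn(s2).shape)   # (50,) (50,)
```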
TreeRNNs have shown promise (Tai et al., 2015; Li et al., 2015; Bowman et al., 2015b), but have 1466 buffer down sat stack cat the composition tracking transition reduce down sat the cat composition tracking transition shift down sat the cat tracking (a) The SPINN model unrolled for two transitions during the processing of the sentence the cat sat down. ‘Tracking’, ‘transition’, and ‘composition’ are neural network layers. Gray arrows indicate connections which are blocked by a gating function. buffer stack t = 0 down sat cat the shift t = 1 down sat cat the shift t = 2 down sat cat the reduce t = 3 down sat the cat shift t = 4 down sat the cat shift t = 5 down sat the cat reduce t = 6 sat down the cat reduce t = 7 = T (the cat) (sat down) output to model for semantic task (b) The fully unrolled SPINN for the cat sat down, with neural network layers omitted for clarity. Figure 2: Two views of the Stack-augmented Parser-Interpreter Neural Network (SPINN). largely been overlooked in favor of sequencebased RNNs because of their incompatibility with batched computation and their reliance on external parsers. Batched computation—performing synchronized computation across many examples at once—yields order-of-magnitude improvements in model run time, and is crucial in enabling neural networks to be trained efficiently on large datasets. Because TreeRNNs use a different model structure for each sentence, as in Figure 1, efficient batching is impossible in standard implementations. Partly to address efficiency problems, standard TreeRNN models commonly only operate on sentences that have already been processed by a syntactic parser, which slows and complicates the use of these models at test time for most applications. This paper introduces a new model to address both these issues: the Stack-augmented ParserInterpreter Neural Network, or SPINN, shown in Figure 2. SPINN executes the computations of a tree-structured model in a linearized sequence, and can incorporate a neural network parser that produces the required parse structure on the fly. This design improves upon the TreeRNN architecture in three ways: At test time, it can simultaneously parse and interpret unparsed sentences, removing the dependence on an external parser at nearly no additional computational cost. Secondly, it supports batched computation for both parsed and unparsed sentences, yielding dramatic speedups over standard TreeRNNs. Finally, it supports a novel tree-sequence hybrid architecture for handling local linear context in sentence interpretation. This model is a basically plausible model of human sentence processing and yields substantial accuracy gains over pure sequence- or tree-based models. WeevaluateSPINNonthe Stanford NaturalLanguage Inference entailment task (SNLI, Bowman et al., 2015a), and find that it significantly outperforms other sentence-encoding-based models, even with a relatively simple and underpowered implementation of the built-in parser. We also find that SPINN yields speed increases of up to 25× over a standard TreeRNN implementation. 2 Related work There is a fairly long history of work on building neural network-based parsers that use the core operations and data structures from transition-based parsing, of which shift-reduce parsing is a variant (Henderson, 2004; Emami and Jelinek, 2005; Titov and Henderson, 2010; Chen and Manning, 2014; Buys and Blunsom, 2015; Dyer et al., 2015; Kiperwasser and Goldberg, 2016). 
In addition, there has been recent work proposing models designed primarily for generative language modeling tasks that use this architecture as well (Zhang et al., 2016; Dyer et al., 2016). To our knowledge, SPINN is the first model to use this architecture for the purpose of sentence interpretation, rather than parsing 1467 or generation. Socher et al. (2011a,b) present versions of the TreeRNN model which are capable of operating over unparsed inputs. However, these methods require an expensive search process at test time. Our model presents a much faster alternative approach. 3 Our model: SPINN 3.1 Background: Shift-reduce parsing SPINN is inspired by shift-reduce parsing (Aho and Ullman, 1972), which builds a tree structure over a sequence (e.g., a natural language sentence) by a single left-to-right scan over its tokens. The formalism is widely used in natural language parsing (e.g., Shieber, 1983; Nivre, 2003). A shift-reduce parser accepts a sequence of input tokens x = (x0, . . ., xN−1) and consumes transitions a = (a0, . . ., aT−1), where each at ∈ {shift, reduce} specifies one step of the parsing process. In general a parser may also generate these transitions on the fly as it reads the tokens. It proceeds left-to-right through a transition sequence, combining the input tokens x incrementally into a tree structure. For any binary-branching tree structure over N words, this requires T = 2N −1 transitions through a total of T + 1 states. The parser uses two auxiliary data structures: a stack S of partially completed subtrees and a buffer B of tokens yet to be parsed. The parser is initialized with the stack empty and the buffer containing the tokens x of the sentence in order. Let ⟨S, B⟩= ⟨∅, x⟩denote this starting state. It next proceeds through the transition sequence, where each transition at selects one of the two following operations. Below, the | symbol denotes the cons (concatenation) operator. We arbitrarily choose to always cons on the left in the notation below. shift: ⟨S, x | B⟩→⟨x | S, B⟩. This operation pops an element from the buffer and pushes it on to the top of the stack. reduce: ⟨x | y | S, B⟩→⟨(x, y) | S, B⟩. This operation pops the top two elements from the stack, merges them, and pushes the result back on to the stack. 3.2 Composition and representation SPINN is based on a shift-reduce parser, but it is designed to produce a vector representation of a sentence as its output, rather than a tree as in standard shift-reduce parsing. It modifies the shiftreduce formalism by using fixed length vectors to represent each entry in the stack and the buffer. Correspondingly, its reduce operation combines two vector representations from the stack into another vector using a neural network function. The composition function When a reduce operation is performed, the vector representations of two tree nodes are popped offof the stack and fed into a composition function, which is a neural network function that produces a representation for a new tree node that is the parent of the two popped nodes. This new node is pushed on to the stack. The TreeLSTM composition function (Tai et al., 2015) generalizes the LSTM neural network layer to tree- rather than sequence-based inputs, and it shares with the LSTM the idea of representing intermediate states as a pair of an active state representation ⃗h and a memory representation ⃗c. Our version is formulated as:  ⃗i ⃗fl ⃗fr ⃗o ⃗g  =  σ σ σ σ tanh  *.. 
, Wcomp  ⃗h1 s ⃗h2 s ⃗e  + ⃗bcomp +// (1) ⃗c = ⃗fl ⊙⃗c 2 s + ⃗fr ⊙⃗c 1 s +⃗i ⊙⃗g (2) ⃗h = ⃗o ⊙⃗c (3) where σ is the sigmoid activation function, ⊙is the elementwise product, the pairs ⟨⃗h1 s, ⃗c 1 s ⟩and ⟨⃗h2 s, ⃗c 2 s ⟩ are the two input tree nodes popped offthe stack, and ⃗e is an optional vector-valued input argument which is either empty or comes from an external source like the tracking LSTM (see Section 3.3). The result of this function, the pair ⟨⃗h, ⃗c⟩, is placed back on the stack. Each vector-valued variable listed is of dimension D except ⃗e, of the independent dimension Dtracking. The stack and buffer The stack and the buffer are arrays of N elements each (for sentences of up to N words), with the two D-dimensional vectors ⃗h and ⃗c in each element. Word representations We use word representations based on the 300D vectors provided with GloVe (Pennington et al., 2014). We do not update these representations during training. Instead, we use a learned linear transformation to map each input word vector ⃗xGloVe into a vector pair ⟨⃗h, ⃗c⟩that is stored in the buffer: (4) "⃗h ⃗c # = Wwd⃗xGloVe + ⃗bwd 1468 3.3 The tracking LSTM In addition to the stack, buffer, and composition function, our full model includes an additional component: the tracking LSTM. This is a simple sequence-based LSTM RNN that operates in tandem with the model, taking inputs from the buffer and stack at each step. It is meant to maintain a low-resolution summary of the portion of the sentence that has been processed so far, which is used for two purposes: it supplies feature representations to the transition classifier, which allows the model to stand alone as a parser, and it additionally supplies a secondary input ⃗e to the composition function—see (1)—allowing context information to enter the construction of sentence meaning and forming what is effectively a tree-sequence hybrid model. The tracking LSTM’s inputs (yellow in Figure 2) are the top element of the buffer ⃗h1 b (which would be moved in a shift operation) and the top two elements of the stack ⃗h1 s and ⃗h2 s (which would be composed in a reduce operation). Why a tree-sequence hybrid? Lexical ambiguity is ubiquitous in natural language. Most words have multiple senses or meanings, and it is generally necessary to use the context in which a word occurs to determine which of its senses or meanings is meant in a given sentence. Even though TreeRNNs are more effective at composing meanings in principle, this ambiguity can give simpler sequence-based sentence-encoding models an advantage: when a sequence-based model first processes a word, it has direct access to a state vector that summarizes the left context of that word, which acts as a cue for disambiguation. In contrast, when a standard tree-structured model first processes a word, it only has access to the constituent that the word is merging with, which is often just a single additional word. Feeding a context representation from the tracking LSTM into the composition function is a simple and efficient way to mitigate this disadvantage of tree-structured models. Using left linear context to disambiguate is also a plausible model of human interpretation. 
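A NumPy sketch of the composition step in equations (1)–(3), including the optional tracking input ⃗e; the dimensions, initialization, and single stacked weight matrix are placeholder assumptions rather than the released implementation.

```python
import numpy as np

D, D_tracking = 4, 3
rng = np.random.default_rng(1)
# One weight matrix producing the five gate pre-activations [i, f_l, f_r, o, g].
W_comp = rng.standard_normal((5 * D, 2 * D + D_tracking)) * 0.1
b_comp = np.zeros(5 * D)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_lstm_compose(h1, c1, h2, c2, e):
    """Reduce: combine the top two stack entries <h1,c1>, <h2,c2> into a parent node."""
    pre = W_comp @ np.concatenate([h1, h2, e]) + b_comp
    i, f_l, f_r, o = (sigmoid(pre[k * D:(k + 1) * D]) for k in range(4))
    g = np.tanh(pre[4 * D:])
    c = f_l * c2 + f_r * c1 + i * g          # equation (2)
    h = o * c                                # equation (3): no tanh on the cell here
    return h, c

h1, c1, h2, c2 = (rng.standard_normal(D) for _ in range(4))
e = rng.standard_normal(D_tracking)          # context vector from the tracking LSTM
h, c = tree_lstm_compose(h1, c1, h2, c2, e)
print(h.shape, c.shape)                      # (4,) (4,)
```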
It would be straightforward to augment SPINN to support the use of some amount of right-side context as well, but this would add complexity to the model that we think is largely unnecessary: humans are very effective at understanding the beginnings of sentences before having seen or heard the ends, suggesting that it is possible to get by without the unavailable right-side context. 3.4 Parsing: Predicting transitions For SPINN to operate on unparsed inputs, it needs to produce its own transition sequence a rather than relying on an external parser to supply it as part of the input. To do this, the model predicts at at each step using a simple two-way softmax classifier whose input is the state of the tracking LSTM: (5) ⃗pa = softmax(Wtrans⃗htracking + ⃗btrans) The above model is nearly the simplest viable implementation of a transition decision function. In contrast, the decision functions in state-of-the-art transition-based parsers tend to use significantly richer feature sets as inputs, including features containing information about several upcoming words on the buffer. The value ⃗htracking is a function of only the very top of the buffer and the top two stack elements at each timestep. At test time, the model uses whichever transition (i.e., shift or reduce) is assigned a higher (unnormalized) probability. The prediction function is trained to mimic the decisions of an external parser. These decisions are used as inputs to the model during training. For SNLI, we use the binary Stanford PCFG Parser parses that are included with the corpus. We did not find scheduled sampling (Bengio et al., 2015)—having the model use its own transition decisions sometimes at training time—to help. 3.5 Implementation issues Representing the stack efficiently A naïve implementation of SPINN needs to handle a size O(N) stack at each timestep, any element of which may be involved in later computations. A naïve backpropagation implementation would then require storing each of the O(N) stacks for a backward pass, leading to a per-example space requirement of O(NTD) floats. This requirement is prohibitively large for significant batch sizes or sentence lengths N. Such a naïve implementation would also require copying a largely unchanged stack at each timestep, since each shift or reduce operation writes only one new representation to the top of the stack. We propose a space-efficient stack representation inspired by the zipper technique (Huet, 1997) that we call thin stack. For each input sentence, we 1469 Algorithm 1 The thin stack algorithm 1: function Step(bufferTop, a, t, S, Q) 2: if a = shift then 3: S[t] := bufferTop 4: else if a = reduce then 5: right := S[Q.pop()] 6: left := S[Q.pop()] 7: S[t] := Compose(left, right) 8: Q.push(t) represent the stack with a single T × D matrix S. Each row S[t] (for 0 < t ≤T) represents the top of the actual stack at timestep t. At each timestep we can shift a new element onto the stack, or reduce the top two elements of the stack into a single element. To shift an element from the buffer to the top of the stack at timestep t, we simply write it into the location S[t]. In order to perform the reduce operation, we need to retrieve the top two elements of the actual stack. We maintain a queue Q of pointers into S which contains the row indices of S which are still present in the actual stack. The top two elements of the stack can be found by using the final two pointers in the queue Q. 
These retrieved elements are used to perform the reduce operation, which modifies Q to mark that some rows of S have now been replaced in the actual stack. Algorithm 1 describes the full mechanics of a stack feedforward in this compressed representation. It operates on the single T × D matrix S and a backpointer queue Q. Table 1 shows an example run. This stack representation requires substantially less space. It stores each element involved in the feedforward computation exactly once, meaning that this representation can still support efficient backpropagation. Furthermore, all of the updates to S and Q can be performed batched and in-place on a GPU, yielding substantial speed gains over both a more naïve SPINN implementation and a standard TreeRNN implementation. We describe speed results in Section 3.7. Preparing the data At training time, SPINN requires both a transition sequence a and a token sequence x as its inputs for each sentence. The token sequence is simply the words in the sentence in order. a can be obtained from any constituency parse for the sentence by first converting that parse into an unlabeled binary parse, then linearizing it (with the usual in-order traversal), then taking each t S[t] Qt at 0 shift 1 Spot 1 shift 2 sat 1 2 shift 3 down 1 2 3 reduce 4 (sat down) 1 4 reduce 5 (Spot (sat down)) 5 Table 1: The thin-stack algorithm operating on the input sequence x = (Spot, sat, down) and the transition sequence shown in the rightmost column. S[t] shows the top of the stack at each step t. The last two elements of Q (underlined) specify which rows t would be involved in a reduce operation at the next step. word token as a shift transition and each ‘)’ as a reduce transition, as here: Unlabeled binary parse: ( ( the cat ) ( sat down ) ) x: the, cat, sat, down a: shift, shift, reduce, shift, shift, reduce, reduce Handling variable sentence lengths For any sentence model to be trained with batched computation, it is necessary to pad or crop sentences to a fixed length. We fix this length at N = 25 words, longer than about 98% of sentences in SNLI. Transition sequences a are cropped at the left or padded at the left with shifts. Token sequences x are then cropped or padded with empty tokens at the left to match the number of shifts added or removed from a, and can then be padded with empty tokens at the right to meet the desired length N. 3.6 TreeRNN-equivalence Without the addition of the tracking LSTM, SPINN (in particular the SPINN-PI-NT variant, for parsed input, no tracking) is precisely equivalent to a conventional tree-structured neural network model in the function that it computes, and therefore it also has the same learning dynamics. In both, the representation of each sentence consists of the representations of the words combined recursively using a TreeRNN composition function (in our case, the TreeLSTM function). SPINN, however, is dramatically faster, and supports both integrated parsing and a novel approach to context through the tracking LSTM. 1470 32 64 128 256 512 1024 2048 0 5 10 15 20 25 Batch size Feedforward time (sec) Thin-stack GPU CPU (Irsoy and Cardie, 2014) RNN Figure 3: Feedforward speed comparison. 3.7 Inference speed In this section, we compare the test-time speed of our SPINN-PI-NT with an equivalent TreeRNN implemented in the conventional fashion and with a standard RNN sequence model. 
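Before turning to the speed comparison, the transition-extraction scheme described under "Preparing the data" above is simple enough to write out. The only assumption in this sketch is that the unlabeled binary parse string is whitespace-tokenized, as in the example.

```python
def parse_to_transitions(binary_parse):
    """Convert an unlabeled binary parse such as
    '( ( the cat ) ( sat down ) )' into the token sequence x and the
    transition sequence a: every word becomes a shift, every ')' a reduce.
    """
    tokens, transitions = [], []
    for item in binary_parse.split():
        if item == '(':
            continue
        elif item == ')':
            transitions.append('reduce')
        else:
            tokens.append(item)
            transitions.append('shift')
    return tokens, transitions

x, a = parse_to_transitions('( ( the cat ) ( sat down ) )')
# x = ['the', 'cat', 'sat', 'down']
# a = ['shift', 'shift', 'reduce', 'shift', 'shift', 'reduce', 'reduce']
```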
While the full models evaluated below are implemented and trained using Theano (Theano Development Team, 2016), which is reasonably efficient but not perfect for our model, we wish to compare well-optimized implementations of all three models. To do this, we reimplement the feedforward1 of SPINN-PI-NT and an LSTM RNN baseline in C++/CUDA, and compare that implementation with a CPU-based C++/Eigen TreeRNN implementation from Irsoy and Cardie (2014), which we modified to perform exactly the same computations as SPINN-PI-NT.2 TreeRNNs like this can only operate on a single example at a time and are thus poorly suited for GPU computation. Each model is restricted to run on sentences of 30 tokens or fewer. We fix the model dimension D and the word embedding dimension at 300. We run the CPU performance test on a 2.20 GHz 16-core Intel Xeon E5-2660 processor with hyperthreading enabled. We test our thin-stack implementation and the RNN model on an NVIDIA Titan X GPU. Figure 3 compares the sentence encoding speed of the three models on random input data. We observe a substantial difference in runtime between the CPU and thin-stack implementations that increases with batch size. With a large but practical 1We chose to reimplement and evaluate only the feedforward/inference pass, as inference speed is the relevant performance metric for most practical applications. 2The original code for Irsoy & Cardie’s model is available at https://github.com/oir/deep-recursive. Our optimized C++/CUDA models and the Theano source code for the full SPINN are available at https://github.com/ stanfordnlp/spinn. batch size of 512, the largest on which we tested the TreeRNN, our model is about 25× faster than the standard CPU implementation, and about 4× slower than the RNN baseline. Though this experiment only covers SPINNPI-NT, the results should be similar for the full SPINN model: most of the computation involved in running SPINN is involved in populating the buffer, applying the composition function, and manipulating the buffer and the stack, with the low-dimensional tracking and parsing components adding only a small additional load. 4 NLI Experiments We evaluate SPINN on the task of natural language inference (NLI, a.k.a. recognizing textual entailment, or RTE; Dagan et al., 2006). NLI is a sentence pair classification task, in which a model reads two sentences (a premise and a hypothesis), and outputs a judgment of entailment, contradiction, or neutral, reflecting the relationship between the meanings of the two sentences. Below is an example sentence pair and judgment from the SNLI corpus which we use in our experiments: Premise: Girl in a red coat, blue head wrap and jeans is making a snow angel. Hypothesis: A girl outside plays in the snow. Label: entailment SNLI is a corpus of 570k human-labeled pairs of scene descriptions like this one. We use the standard train–test split and ignore unlabeled examples, which leaves about 549k examples for training, 9,842 for development, and 9,824 for testing. SNLI labels are roughly balanced, with the most frequent label, entailment, making up 34.2% of the test set. Although NLI is framed as a simple three-way classification task, it is nonetheless an effective way of evaluating the ability of a model to extract broadly informative representations of sentence meaning. 
In order for a model to perform reliably well on NLI, it must be able to represent and reason with the core phenomena of natural language semantics, including quantification, coreference, scope, and several types of ambiguity. 4.1 Applying SPINN to SNLI Creating a sentence-pair classifier To classify an SNLI sentence pair, we run two copies of SPINN with shared parameters: one on the premise sentence and another on the hypothesis sentence. We then use their outputs (the ⃗h states at the top of each 1471 Param. Range Strategy RNN SP.-PI-NT SP.-PI SP. Initial LR 2 × 10−4–2 × 10−2 log 5 × 10−3 3 × 10−4 7 × 10−3 2 × 10−3 L2 regularization λ 8 × 10−7–3 × 10−5 log 4 × 10−6 3 × 10−6 2 × 10−5 3 × 10−5 Transition cost α 0.5–4.0 lin — — — 3.9 Embedding transformation dropout 80–95% lin — 83% 92% 86% Classifier MLP dropout 80–95% lin 94% 94% 93% 94% Tracking LSTM size Dtracking 24–128 log — — 61 79 Classifier MLP layers 1–3 lin 2 2 2 1 Table 2: Hyperparameter ranges and values. Range shows the hyperparameter ranges explored during random search. Strategy indicates whether sampling from the range was uniform, or log-uniform. Dropout parameters are expressed as keep rates rather than drop rates. stack at time t = T) to construct a feature vector ⃗xclassifier for the pair. This feature vector consists of the concatenation of these two sentence vectors, their difference, and their elementwise product (following Mou et al., 2016): (6) ⃗xclassifier =  ⃗hpremise ⃗hhypothesis ⃗hpremise −⃗hhypothesis ⃗hpremise ⊙⃗hhypothesis  This feature vector is then passed to a series of 1024D ReLU neural network layers (i.e., an MLP; the number of layers is tuned as a hyperparameter), then passed into a linear transformation, and then finally passed to a softmax layer, which yields a distribution over the three labels. The objective function Our objective combines a cross-entropy objective Ls for the SNLI classification task, cross-entropy objectives Lt p and Lt h for the parsing decision for each of the two sentences at each step t, and an L2 regularization term on the trained parameters. The terms are weighted using the tuned hyperparameters α and λ: (7) Lm =Ls + α T−1 X t=0 (Lt p + Lt h) + λ∥θ∥2 2 Initialization, optimization, and tuning We initialize the model parameters using the nonparametric strategy of He et al. (2015), with the exception of the softmax classifier parameters, which we initialize using random uniform samples from [−0.005, 0.005]. We use minibatch SGD with the RMSProp optimizer (Tieleman and Hinton, 2012) and a tuned starting learning rate that decays by a factor of 0.75 every 10k steps. We apply both dropout (Srivastava et al., 2014) and batch normalization (Ioffe and Szegedy, 2015) to the output of the word embedding projection layer and to the feature vectors that serve as the inputs and outputs to the MLP that precedes the final entailment classifier. We train each model for 250k steps in each run, using a batch size of 32. We track each model’s performance on the development set during training and save parameters when this performance reaches a new peak. We use early stopping, evaluating on the test set using the parameters that perform best on the development set. 
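As a concrete reading of the classifier described in Section 4.1, the feature vector of (6) and the MLP on top of it amount to the following sketch. Layer sizes and initialization here are illustrative only, and dropout and batch normalization are omitted for brevity.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pair_features(h_premise, h_hypothesis):
    """Eq. (6): concatenation, difference, and elementwise product."""
    return np.concatenate([h_premise,
                           h_hypothesis,
                           h_premise - h_hypothesis,
                           h_premise * h_hypothesis])

def classify_pair(h_premise, h_hypothesis, mlp, W_out, b_out):
    """ReLU MLP layers, a linear map, then a softmax over the three labels
    (entailment, contradiction, neutral). `mlp` is a list of (W, b) pairs."""
    x = pair_features(h_premise, h_hypothesis)
    for W, b in mlp:
        x = relu(W @ x + b)
    return softmax(W_out @ x + b_out)

# Toy usage: D = 300 sentence encodings, one 1024-dimensional hidden layer.
rng = np.random.default_rng(0)
D, H = 300, 1024
mlp = [(0.01 * rng.standard_normal((H, 4 * D)), np.zeros(H))]
W_out, b_out = 0.01 * rng.standard_normal((3, H)), np.zeros(3)
probs = classify_pair(rng.standard_normal(D), rng.standard_normal(D),
                      mlp, W_out, b_out)
```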
We use random search to tune the hyperparameters of each model, setting the ranges for search for each hyperparameter heuristically (and validating the reasonableness of the ranges on the development set), and then launching eight copies of each experiment each with newly sampled hyperparameters from those ranges. Table 2 shows the hyperparameters used in the best run of each model. 4.2 Models evaluated We evaluate four models. The four all use the sentence-pair classifier architecture described in Section 4.1, and differ only in the function computing the sentence encodings. First, a singlelayer LSTM RNN (similar to that of Bowman et al., 2015a) serves as a baseline encoder. Next, the minimal SPINN-PI-NT model (equivalent to a TreeLSTM) introduces the SPINN model design. SPINN-PI adds the tracking LSTM to that design. Finally, the full SPINN adds the integrated parser. We compare our models against several baselines, including the strongest published non-neural network-based result from Bowman et al. (2015a) and previous neural network models built around several types of sentence encoders. 1472 Model Params. Trans. acc. (%) Train acc. (%) Test acc. (%) Previous non-NN results Lexicalized classifier (Bowman et al., 2015a) — — 99.7 78.2 Previous sentence encoder-based NN results 100D LSTM encoders (Bowman et al., 2015a) 221k — 84.8 77.6 1024D pretrained GRU encoders (Vendrov et al., 2016) 15m — 98.8 81.4 300D Tree-based CNN encoders (Mou et al., 2016) 3.5m — 83.4 82.1 Our results 300D LSTM RNN encoders 3.0m — 83.9 80.6 300D SPINN-PI-NT (parsed input, no tracking) encoders 3.4m — 84.4 80.9 300D SPINN-PI (parsed input) encoders 3.7m — 89.2 83.2 300D SPINN (unparsed input) encoders 2.7m 92.4 87.2 82.6 Table 3: Results on SNLI 3-way inference classification. Params. is the approximate number of trained parameters (excluding word embeddings for all models). Trans. acc. is the model’s accuracy in predicting parsing transitions at test time. Train and test are SNLI classification accuracy. 4.3 Results Table 3 shows our results on SNLI. For the full SPINN, we also report a measure of agreement between this model’s parses and the parses included with SNLI, calculated as classification accuracy over transitions averaged across timesteps. We find that the bare SPINN-PI-NT model performs little better than the RNN baseline, but that SPINN-PI with the added tracking LSTM performs well. The success of SPINN-PI, which is the hybrid tree-sequence model, suggests that the treeand sequence-based encoding methods are at least partially complementary, with the sequence model presumably providing useful local word disambiguation. The full SPINN model with its relatively weak internal parser performs slightly less well, but nonetheless robustly exceeds the performance of the RNN baseline. Both SPINN-PI and the full SPINN significantly outperform all previous sentence-encoding models. Most notably, these models outperform the tree-based CNN of Mou et al. (2016), which also uses tree-structured composition for local feature extraction, but uses simpler pooling techniques to build sentence features in the interest of efficiency. Our results show that a model that uses tree-structured composition fully (SPINN) outperforms one which uses it only partially (tree-based CNN), which in turn outperforms one which does not use it at all (RNN). 
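Returning briefly to the tuning procedure above Table 2: the "lin"/"log" sampling strategies correspond to the sketch below. The ranges are copied from Table 2; everything else (the seeding and the rounding of the tracking-LSTM size) is our own illustration, not the authors' tuning code.

```python
import math
import random

def sample_config(seed=None):
    """Draw one random-search configuration following Table 2:
    'log' ranges are sampled log-uniformly, 'lin' ranges uniformly."""
    rng = random.Random(seed)

    def log_uniform(lo, hi):
        return 10.0 ** rng.uniform(math.log10(lo), math.log10(hi))

    return {
        'initial_lr':          log_uniform(2e-4, 2e-2),
        'l2_lambda':           log_uniform(8e-7, 3e-5),
        'transition_cost':     rng.uniform(0.5, 4.0),
        'embedding_keep_rate': rng.uniform(0.80, 0.95),
        'mlp_keep_rate':       rng.uniform(0.80, 0.95),
        'tracking_lstm_size':  int(round(log_uniform(24, 128))),
        'mlp_layers':          rng.randint(1, 3),
    }

# Eight copies of an experiment, each with freshly sampled hyperparameters.
configs = [sample_config(seed=i) for i in range(8)]
```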
The full SPINN performed moderately well at reproducing the Stanford Parser’s parses of the SNLI data at a transition-by-transition level, with 92.4% accuracy at test time.3 However, its transi3Note that this is scoring the model against automatic tion prediction errors are fairly evenly distributed across sentences, and most sentences were assigned partially invalid transition sequences that either left a few words out of the final representation or incorporated a few padding tokens into the final representation. 4.4 Discussion The use of tree structure improves the performance of sentence-encoding models for SNLI. We suspect that this improvement is largely due to the more efficient learning of accurate generalizations overall, and not to any particular few phenomena. However, some patterns are identifiable in the results. While all four models under study have trouble with negation, the tree-structured SPINN models do quite substantially better on these pairs. This is likely due to the fact that parse trees make the scope of any instance of negation (the portion of the sentence’s content that is negated) relatively easy to identify and separate from the rest of the sentence. For test set sentence pairs like the one below where negation (not or n’t) does not appear in the premise but does appear in the hypothesis, the RNN shows 67% accuracy, while all three treestructured models exceed 73%. Only the RNN got the below example wrong: Premise: The rhythmic gymnast completes her floor exercise at the competition. Hypothesis: The gymnast cannot finish her exercise. Label: contradiction Note that the presence of negation in the hypothesis is correlated with a label of contradiction in SNLI, but not as strongly as one might intuit—only 45% of these examples in the test set are labeled as parses, not a human-judged gold standard. 1473 contradictions. In addition, it seems that tree-structured models, and especially the tree-sequence hybrid models, are more effective than RNNs at extracting informative representations of long sentences. The RNN model falls offin test accuracy more quickly with increasing sentence length than SPINN-PINT, which in turn falls of substantially faster than the two hybrid models, repeating a pattern seen more dramatically on artificial data in Bowman et al. (2015b). On pairs with premises of 20 or more words, the RNN’s 76.7% accuracy, while SPINN-PI reaches 80.2%. All three SPINN models labeled the following example correctly, while the RNN did not: Premise: A man wearing glasses and a ragged costume is playing a Jaguar electric guitar and singing with the accompaniment of a drummer. Hypothesis: A man with glasses and a disheveled outfit is playing a guitar and singing along with a drummer. Label: entailment We suspect that the hybrid nature of the full SPINN model is also responsible for its surprising ability to perform better than an RNN baseline even when its internal parser is relatively ineffective at producing correct full-sentence parses. It may act somewhat like the tree-based CNN, only with access to larger trees: using tree structure to build up local phrase meanings, and then using the tracking LSTM, at least in part, to combine those meanings. Finally, as is likely inevitable for models evaluated on SNLI, all four models under study did several percent worse on test examples whose ground truth label is neutral than on examples of the other two classes. 
Entailment–neutral and neutral–contradiction confusions appear to be much harder to avoid than entailment–contradiction confusions, where relatively superficial cues might be more readily useful. 5 Conclusions and future work We introduce a model architecture (SPINN-PI-NT) that is equivalent to a TreeLSTM, but an order of magnitude faster at test time. We expand that architecture into a tree-sequence hybrid model (SPINNPI), and show that this yields significant gains on the SNLI entailment task. Finally, we show that it is possible to exploit the strengths of this model without the need for an external parser by integrating a fast parser into the model (as in the full SPINN), and that the lack of external parse information yields little loss in accuracy. Because this paper aims to introduce a general purpose model for sentence encoding, we do not pursue the use of soft attention (Bahdanau et al., 2015; Rocktäschel et al., 2016), despite its demonstrated effectiveness on the SNLI task.4 However, we expect that it should be possible to productively combine our model with soft attention to reach state-of-the-art performance. Our tracking LSTM uses only simple, quick-tocompute features drawn from the head of the buffer and the head of the stack. It is plausible that giving the tracking LSTM access to more information from the buffer and stack at each step would allow it to better represent the context at each tree node, yielding both better parsing and better sentence encoding. One promising way to pursue this goal would be to encode the full contents of the stack and buffer at each time step following the method used by Dyer et al. (2015). For a more ambitious goal, we expect that it should be possible to implement a variant of SPINN on top of a modified stack data structure with differentiable push and pop operations (as in Grefenstette et al., 2015; Joulin and Mikolov, 2015). This would make it possible for the model to learn to parse using guidance from the semantic representation objective, which currently is blocked from influencing the key parsing parameters by our use of hard shift/reduce decisions. This change would allow the model to learn to produce parses that are, in aggregate, better suited to supporting semantic interpretation than those supplied in the training data. Acknowledgments We acknowledge financial support from a Google Faculty Research Award, the Stanford Data Science Initiative, and the National Science Foundation under grant nos. BCS 1456077 and IIS 1514268. Some of the Tesla K40s used for this research were donated by the NVIDIA Corporation. We also thank Kelvin Guu, Noah Goodman, and many others in the Stanford NLP group for helpful comments. 4Attention-based models like Rocktäschel et al. (2016), Wang and Jiang (2016), and the unpublished Cheng et al. (2016) have shown accuracies as high as 86.3% on SNLI, but are more narrowly engineered to suit the task and do not yield sentence encodings. 1474 References Alfred V. Aho and Jeffrey D. Ullman. 1972. The theory of parsing, translation, and compiling. Prentice-Hall, Inc. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). San Diego, CA. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS) 29. 
Montréal, Québec, pages 1171–1179. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015a. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 632–642. Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015b. Tree-structured composition in neural networks without treestructured architectures. In Proceedings of the 2015 NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches. Montréal, Québec, pages 50–55. Jan Buys and Phil Blunsom. 2015. Generative incremental dependency parsing with neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 863– 869. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 740– 750. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. arXiv:1601.06733. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, Springer, pages 177–190. David Dowty. 2007. Compositionality as an empirical problem. In Proceedings of the Brown University Conference on Direct Compositionality. Oxford Univ. Press. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 334–343. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 199–209. Ahmad Emami and Frederick Jelinek. 2005. A neural syntactic language model. Machine learning 60(1–3):195–227. Christoph Goller and Andreas Küchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the IEEE International Conference on Neural Networks. Washington, DC, pages 347–352. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems (NIPS) 29. Montréal, Québec, pages 1828–1836. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the Interna1475 tional Conference on Computer Vision (ICCV). Santiago, Chile, pages 1026–1034. James Henderson. 2004. Discriminative training of a neural network statistical parser. 
In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume. Barcelona, Spain, pages 95–102. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Gérard Huet. 1997. The zipper. Journal of functional programming 7(5):549–554. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML). Lille, France. Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems (NIPS) 27, Curran Associates, Inc., pages 2096–2104. Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems (NIPS) 29. Montréal, Québec. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 655–665. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Easy-first dependency parsing with hierarchical tree LSTMs. arXiv:1603.00375. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2304–2314. Lili Mou, Men Rui, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the ACL. Berlin, Germany. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT). pages 149–160. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočisk`y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of the International Conference on Learning Representations (ICLR). San Juan, Puerto Rico. Stuart M. Shieber. 1983. Sentence disambiguation by a shift-reduce parsing technique. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Cambridge, Massachusetts, USA, pages 113–118. Richard Socher, CliffChiung-Yu Lin, Andrew Ng, and Chris Manning. 2011a. Parsing natural scenes and natural language with recursive neural networks. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11). ACM, New York, NY, USA, ICML ’11, pages 129–136. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics, Edinburgh, Scotland, UK., pages 151–161. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929– 1958. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neu1476 ral networks. In Advances in Neural Information Processing Systems (NIPS) 28. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1556– 1566. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5 – RMSProp: Divide the gradient by a running average of its recent magnitude. In Neural Networks for Machine Learning, Coursera. Ivan Titov and James Henderson. 2010. A latent variable model for generative dependency parsing. In Harry Bunt, Paola Merlo, and Joakim Nivre, editors, Trends in Parsing Technology, Springer, Netherlands, pages 35–55. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In Proceedings of the International Conference on Learning Representations (ICLR). San Juan, Puerto Rico. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 1442– 1451. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS) 29. Montréal, Québec, pages 649–657. Xingxing Zhang, Liang Lu, and Mirella Lapata. 2016. Top-down tree long short-term memory networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 310–320. 1477
2016
139
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 140–149, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Pointing the Unknown Words Caglar Gulcehre Universit´e de Montr´eal Sungjin Ahn Universit´e de Montr´eal Ramesh Nallapati IBM T.J. Watson Research Bowen Zhou IBM T.J. Watson Research Yoshua Bengio Universit´e de Montr´eal CIFAR Senior Fellow Abstract The problem of rare and unknown words is an important issue that can potentially effect the performance of many NLP systems, including traditional count-based and deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each timestep, the decision of which softmax layer to use is adaptively made by an MLP which is conditioned on the context. We motivate this work from a psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. Using our proposed model, we observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset. 1 Introduction Words are the basic input/output units in most of the NLP systems, and thus the ability to cover a large number of words is a key to building a robust NLP system. However, considering that (i) the number of all words in a language including named entities is very large and that (ii) language itself is an evolving system (people create new words), this can be a challenging problem. A common approach followed by the recent neural network based NLP systems is to use a softmax output layer where each of the output dimension corresponds to a word in a predefined word-shortlist. Because computing high dimensional softmax is computationally expensive, in practice the shortlist is limited to have only topK most frequent words in the training corpus. All other words are then replaced by a special word, called the unknown word (UNK). The shortlist approach has two fundamental problems. The first problem, which is known as the rare word problem, is that some of the words in the shortlist occur less frequently in the training set and thus are difficult to learn a good representation, resulting in poor performance. Second, it is obvious that we can lose some important information by mapping different words to a single dummy token UNK. Even if we have a very large shortlist including all unique words in the training set, it does not necessarily improve the test performance, because there still exists a chance to see an unknown word at test time. This is known as the unknown word problem. In addition, increasing the shortlist size mostly leads to increasing rare words due to Zipf’s Law. These two problems are particularly critical in language understanding tasks such as factoid question answering (Bordes et al., 2015) where the words that we are interested in are often named entities which are usually unknown or rare words. 
In a similar situation, where we have a limited information on how to call an object of interest, it seems that humans (and also some primates) have an efficient behavioral mechanism of drawing attention to the object: pointing (Matthews et al., 2012). Pointing makes it possible to deliver information and to associate context to a particular object without knowing how to call it. In particular, human infants use pointing as a fundamental communication tool (Tomasello et al., 2007). In this paper, inspired by the pointing behavior of humans and recent advances in the atten140 tion mechanism (Bahdanau et al., 2014) and the pointer networks (Vinyals et al., 2015), we propose a novel method to deal with the rare or unknown word problem. The basic idea is that we can see many NLP problems as a task of predicting target text given context text, where some of the target words appear in the context as well. We observe that in this case we can make the model learn to point a word in the context and copy it to the target text, as well as when to point. For example, in machine translation, we can see the source sentence as the context, and the target sentence as what we need to predict. In Figure 1, we show an example depiction of how words can be copied from source to target in machine translation. Although the source and target languages are different, many of the words such as named entities are usually represented by the same characters in both languages, making it possible to copy. Similarly, in text summarization, it is natural to use some words in the original text in the summarized text as well. Specifically, to predict a target word at each timestep, our model first determines the source of the word generation, that is, whether to take one from a predefined shortlist or to copy one from the context. For the former, we apply the typical softmax operation, and for the latter, we use the attention mechanism to obtain the pointing softmax probability over the context words and pick the one of high probability. The model learns this decision so as to use the pointing only when the context includes a word that can be copied to the target. This way, our model can predict even the words which are not in the shortlist, as long as it appears in the context. Although some of the words still need to be labeled as UNK, i.e., if it is neither in the shortlist nor in the context, in experiments we show that this learning when and where to point improves the performance in machine translation and text summarization. Guillaume et Cesar ont une voiture bleue a Lausanne. Guillaume and Cesar have a blue car in Lausanne. Copy Copy Copy French: English: Figure 1: An example of how copying can happen for machine translation. Common words that appear both in source and the target can directly be copied from input to source. The rest of the unknown in the target can be copied from the input after being translated with a dictionary. The rest of the paper is organized as follows. In the next section, we review the related works including pointer networks and previous approaches to the rare/unknown problem. In Section 3, we review the neural machine translation with attention mechanism which is the baseline in our experiments. Then, in Section 4, we propose our method dealing with the rare/unknown word problem, called the Pointer Softmax (PS). The experimental results are provided in the Section 5 and we conclude our work in Section 6. 
2 Related Work The attention-based pointing mechanism is introduced first in the pointer networks (Vinyals et al., 2015). In the pointer networks, the output space of the target sequence is constrained to be the observations in the input sequence (not the input space). Instead of having a fixed dimension softmax output layer, softmax outputs of varying dimension is dynamically computed for each input sequence in such a way to maximize the attention probability of the target input. However, its applicability is rather limited because, unlike our model, there is no option to choose whether to point or not; it always points. In this sense, we can see the pointer networks as a special case of our model where we always choose to point a context word. Several approaches have been proposed towards solving the rare words/unknown words problem, which can be broadly divided into three categories. The first category of the approaches focuses on improving the computation speed of the softmax output so that it can maintain a very large vocabulary. Because this only increases the shortlist size, it helps to mitigate the unknown word problem, but still suffers from the rare word problem. The hierarchical softmax (Morin and Bengio, 2005), importance sampling (Bengio and Sen´ecal, 2008; Jean et al., 2014), and the noise contrastive estimation (Gutmann and Hyv¨arinen, 2012; Mnih and Kavukcuoglu, 2013) methods are in the class. The second category, where our proposed method also belongs to, uses information from the context. Notable works are (Luong et al., 2015) and (Hermann et al., 2015). In particular, applying to machine translation task, (Luong et al., 2015) learns to point some words in source sentence and copy it to the target sentence, similarly to our method. However, it does not use attention mechanism, and by having fixed sized soft141 max output over the relative pointing range (e.g., -7, . . . , -1, 0, 1, . . . , 7), their model (the Positional All model) has a limitation in applying to more general problems such as summarization and question answering, where, unlike machine translation, the length of the context and the pointing locations in the context can vary dramatically. In question answering setting, (Hermann et al., 2015) have used placeholders on named entities in the context. However, the placeholder id is directly predicted in the softmax output rather than predicting its location in the context. The third category of the approaches changes the unit of input/output itself from words to a smaller resolution such as characters (Graves, 2013) or bytecodes (Sennrich et al., 2015; Gillick et al., 2015). Although this approach has the main advantage that it could suffer less from the rare/unknown word problem, the training usually becomes much harder because the length of sequences significantly increases. Simultaneously to our work, (Gu et al., 2016) and (Cheng and Lapata, 2016) proposed models that learn to copy from source to target and both papers analyzed their models on summarization tasks. 3 Neural Machine Translation Model with Attention As the baseline neural machine translation system, we use the model proposed by (Bahdanau et al., 2014) that learns to (soft-)align and translate jointly. We refer this model as NMT. The encoder of the NMT is a bidirectional RNN (Schuster and Paliwal, 1997). The forward RNN reads input sequence x = (x1, . . . , xT ) in left-to-right direction, resulting in a sequence of hidden states (−→ h 1, . . . , −→ h T ). 
The backward RNN reads x in the reversed direction and outputs (←− h 1, . . . , ←− h T ). We then concatenate the hidden states of forward and backward RNNs at each time step and obtain a sequence of annotation vectors (h1, . . . , hT ) where hj = h−→ h j||←− h j i . Here, || denotes the concatenation operator. Thus, each annotation vector hj encodes information about the j-th word with respect to all the other surrounding words in both directions. In the decoder, we usually use gated recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014). Specifically, at each time-step t, the softalignment mechanism first computes the relevance weight etj which determines the contribution of annotation vector hj to the t-th target word. We use a non-linear mapping f (e.g., MLP) which takes hj, the previous decoder’s hidden state st−1 and the previous output yt−1 as input: etj = f(st−1, hj, yt−1). The outputs etj are then normalized as follows: ltj = exp(etj) PT k=1 exp(etk) . (1) We call ltj as the relevance score, or the alignment weight, of the j-th annotation vector. The relevance scores are used to get the context vector ct of the t-th target word in the translation: ct = T X j=1 ltjhj , The hidden state of the decoder st is computed based on the previous hidden state st−1, the context vector ct and the output word of the previous time-step yt−1: st = fr(st−1, yt−1, ct), (2) where fr is GRU. We use a deep output layer (Pascanu et al., 2013) to compute the conditional distribution over words: p(yt = a|y<t, x) ∝ exp  ψa (Wo,bo)fo(st, yt−1, ct)  , (3) where W is a learned weight matrix and b is a bias of the output layer. fo is a single-layer feedforward neural network. ψ(Wo,bo)(·) is a function that performs an affine transformation on its input. And the superscript a in ψa indicates the a-th column vector of ψ. The whole model, including both the encoder and the decoder, is jointly trained to maximize the (conditional) log-likelihood of target sequences given input sequences, where the training corpus is a set of (xn, yn)’s. Figure 2 illustrates the architecture of the NMT. 4 The Pointer Softmax In this section, we introduce our method, called as the pointer softmax (PS), to deal with the rare and unknown words. The pointer softmax can be 142 lt wt st-1 ct ... st h0 hk ... ... st+1 Encoder Figure 2: A depiction of neural machine translation architecture with attention. At each timestep, the model generates the attention weights lt. We use lt the encoder’s hidden state to obtain the context ct. The decoder uses ct to predict a vector of probabilities for the words wt by using softmax. applicable approach to many NLP tasks, because it resolves the limitations about unknown words for neural networks. It can be used in parallel with other existing techniques such as the large vocabulary trick (Jean et al., 2014). Our model learns two key abilities jointly to make the pointing mechanism applicable in more general settings: (i) to predict whether it is required to use the pointing or not at each time step and (ii) to point any location of the context sequence whose length can vary widely over examples. Note that the pointer networks (Vinyals et al., 2015) are in lack of the ability (i), and the ability (ii) is not achieved in the models by (Luong et al., 2015). To achieve this, our model uses two softmax output layers, the shortlist softmax and the location softmax. 
The shortlist softmax is the same as the typical softmax output layer where each dimension corresponds a word in the predefined word shortlist. The location softmax is a pointer network where each of the output dimension corresponds to the location of a word in the context sequence. Thus, the output dimension of the location softmax varies according to the length of the given context sequence. At each time-step, if the model decides to use the shortlist softmax, we generate a word wt from the shortlist. Otherwise, if it is expected that the context sequence contains a word which needs to be generated at the time step, we obtain the location of the context word lt from the location softmax. The key to making this possible is deciding when to use the shortlist softmax or the location softmax at each time step. In order to accomplish this, we introduce a switching network to the model. The switching network, which is a multilayer perceptron in our experiments, takes the representation of the context sequence (similar to the input annotation in NMT) and the previous hidden state of the output RNN as its input. It outputs a binary variable zt which indicates whether to use the shortlist softmax (when zt = 1) or the location softmax (when zt = 0). Note that if the word that is expected to be generated at each timestep is neither in the shortlist nor in the context sequence, the switching network selects the shortlist softmax, and then the shortlist softmax predicts UNK. The details of the pointer softmax model can be seen in Figure 3 as well. lt wt st-1 ct p 1 - p ft ... st h0 hk ... Encoder Figure 3: A simple depiction of the Pointer Softmax(PS) architecture. At each timestep as usuallt, ct and the wt for the words over the limited vocabulary(shortlist) is being generated. We have an additional switching variable zt that decides whether to use wt or copy the word from the input via lt. The final word prediction will be performed via pointer softmax ft which can either copy the word from the source or predict the word from the shortlist vocabulary. More specifically, our goal is to maximize the probability of observing the target word sequence y = (y1, y2, . . . , yTy) and the word generation source z = (z1, z2, . . . , zTy), given the context sequence x = (x1, x2, . . . , xTx): pθ(y, z|x) = Ty Y t=1 pθ(yt, zt|y<t, z<t, x). (4) 143 Note that the word observation yt can be either a word wt from the shortlist softmax or a location lt from the location softmax, depending on the switching variable zt. Considering this, we can factorize the above equation further p(y, z|x) = Y t∈Tw p(wt, zt|(y, z)<t, x) × Y t′∈Tl p(lt′, zt′|(y, z)<t′, x). (5) Here, Tw is a set of time steps where zt = 1, and Tl is a set of time-steps where zt = 0. And, Tw∪Tl = {1, 2, . . . , Ty} and Tw ∩Tl = ∅. We denote all previous observations at step t by (y, z)<t. Note also that ht = f((y, z)<t). Then, the joint probabilities inside each product can be further factorized as follows: p(wt, zt|(y, z)<t) = p(wt|zt = 1, (y, z)<t) × p(zt = 1|(y, z)<t) (6) p(lt, zt|(y, z)<t) = p(lt|zt = 0, (y, z)<t) × p(zt = 0|(y, z)<t) (7) here, we omitted x which is conditioned on all probabilities in the above. The switch probability is modeled as a multilayer perceptron with binary output: p(zt = 1|(y, z)<t, x) = σ(f(x, ht−1; θ)) (8) p(zt = 0|(y, z)<t, x) = 1 −σ(f(x, ht−1; θ)). (9) And p(wt|zt = 1, (y, z)<t, x) is the shortlist softmax and p(lt|zt = 0, (y, z)<t, x) is the location softmax which can be a pointer network. 
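Before unpacking the factorization of (4), the moving parts of Figure 3 can be summarized in code. This is a deliberately simplified sketch: the full model uses a deep output layer for the shortlist softmax and an MLP for the switching network, whereas here both are single affine maps, and the exact conditioning of each component on s_{t-1} and c_t is our assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pointer_softmax_step(H_src, s_prev, y_prev, score, W_short, b_short,
                         w_switch, b_switch):
    """One decoding step of the pointer softmax (sketch).

    H_src   : (T_x, 2H) source annotation vectors h_j.
    s_prev  : previous decoder state s_{t-1}; y_prev: previous output embedding.
    score   : f(s_prev, h_j, y_prev) -> scalar relevance e_tj.
    Returns the shortlist distribution w_t, the location distribution l_t,
    the switch probability d_t = p(z_t = 1 | ...), and their combination f_t.
    """
    e = np.array([score(s_prev, h_j, y_prev) for h_j in H_src])
    l_t = softmax(e)                          # location softmax over source positions
    c_t = l_t @ H_src                         # context vector
    sc = np.concatenate([s_prev, c_t])
    w_t = softmax(W_short @ sc + b_short)     # shortlist softmax over V
    d_t = sigmoid(w_switch @ sc + b_switch)   # switching probability
    f_t = np.concatenate([d_t * w_t, (1.0 - d_t) * l_t])
    return w_t, l_t, d_t, f_t
```

At test time, taking the argmax over f_t plays both roles at once: an index inside the first |V| positions emits a shortlist word, while an index beyond it points to a source location.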
σ(·) stands for the sigmoid function, σ(x) = 1 exp(-x)+1. Given N such context and target sequence pairs, our training objective is to maximize the following log likelihood w.r.t. the model parameter θ arg max θ 1 N N X n=1 log pθ(yn, zn|xn). (10) 4.1 Basic Components of the Pointer Softmax In this section, we discuss practical details of the three fundamental components of the pointer softmax. The interactions between these components and the model is depicted in Figure 3. Location Softmax lt : The location of the word to copy from source text to the target is predicted by the location softmax lt. The location softmax outputs the conditional probability distribution p(lt|zt = 0, (y, z)<t, x). For models using the attention mechanism such as NMT, we can reuse the probability distributions over the source words in order to predict the location of the word to point. Otherwise we can simply use a pointer network of the model to predict the location. Shortlist Softmax wt : The subset of the words in the vocabulary V is being predicted by the shortlist softmax wt. Switching network dt : The switching network dt is an MLP with sigmoid output function that outputs a scalar probability of switching between lt and wt, and represents the conditional probability distribution p(zt|(y, z)<t, x). For NMT model, we condition the MLP that outputs the switching probability on the representation of the context of the source text ct and the hidden state of the decoder ht. Note that, during the training, dt is observed, and thus we do not have to sample. The output of the pointer softmax, ft will be the concatenation of the the two vectors, dt × wt and (1 −dt) × lt. At test time, we compute Eqn. (6) and (7) for all shortlist word wt and all location lt, and pick the word or location of the highest probability. 5 Experiments In this section, we provide our main experimental results with the pointer softmax on machine translation and summarization tasks. In our experiments, we have used the same baseline model and just replaced the softmax layer with pointer softmax layer at the language model. We use the Adadelta (Zeiler, 2012) learning rule for the training of NMT models. The code for pointer softmax model is available at https://github.com/ caglar/pointer_softmax. 5.1 The Rarest Word Detection We construct a synthetic task and run some preliminary experiments in order to compare the results with the pointer softmax and the regular softmax’s performance for the rare-words. The vocabulary size of our synthetic task is |V |= 600 using sequences of length 7. The words in the sequences are sampled according to their unigram distribu144 tion which has the form of a geometric distribution. The task is to predict the least frequent word in the sequence according to unigram distribution of the words. During the training, the sequences are generated randomly. Before the training, validation and test sets are constructed with a fixed seed. We use a GRU layer over the input sequence and take the last-hidden state, in order to get the summary ct of the input sequence. The wt, lt are only conditioned on ct, and the MLP predicting the dt is conditioned on the latent representations of wt and lt. We use minibatches of size 250 using adam adaptive learning rate algorithm (Kingma and Adam, 2015) using the learning rate of 8 × 10−4 and hidden layers with 1000 units. 
We train a model with pointer softmax where we assign pointers for the rarest 60 words and the rest of the words are predicted from the shortlist softmax of size 540. We observe that increasing the inverse temperature of the sigmoid output of dt to 2, in other words making the decisions of dt to become sharper, works better, i.e. dt = σ(2x). At the end of training with pointer softmax we obtain the error rate of 17.4% and by using softmax over all 600 tokens, we obtain the error-rate of 48.2%. 5.2 Summarization In these series of experiments, we use the annotated Gigaword corpus as described in (Rush et al., 2015). Moreover, we use the scripts that are made available by the authors of (Rush et al., 2015) 1 to preprocess the data, which results to approximately 3.8M training examples. This script generates about 400K validation and an equal number of test examples, but we use a randomly sampled subset of 2000 examples each for validation and testing. We also have made small modifications to the script to extract not only the tokenized words, but also system-generated named-entity tags. We have created two different versions of training data for pointers, which we call UNK-pointers data and entity-pointers data respectively. For the UNK-pointers data, we trim the vocabulary of the source and target data in the training set and replace a word by the UNK token whenever a word occurs less than 5 times in either source or target data separately. Then, we create pointers 1https://github.com/facebook/NAMAS from each UNK token in the target data to the position in the corresponding source document where the same word occurs in the source, as seen in the data before UNKs were created. It is possible that the source can have an UNK in the matching position, but we still created a pointer in this scenario as well. The resulting data has 2.7 pointers per 100 examples in the training set and 9.1 pointers rate in the validation set. In the entity-pointers data, we exploit the named-entity tags in the annotated corpus and first anonymize the entities by replacing them with an integer-id that always starts from 1 for each document and increments from left to right. Entities that occur more than once in a single document share the same id. We create the anonymization at token-level, so as to allow partial entity matches between the source and target for multi-token entities. Next, we create a pointer from the target to source on similar lines as before, but only for exact matches of the anonymized entities. The resulting data has 161 pointers per 100 examples in the training set and 139 pointers per 100 examples in the validation set. If there are multiple matches in the source, either in the UNK-pointers data or the entitypointers data, we resolve the conflict in favor of the first occurrence of the matching word in the source document. In the UNK data, we model the UNK tokens on the source side using a single placeholder embedding that is shared across all documents, and in the entity-pointers data, we model each entity-id in the source by a distinct placeholder, each of which is shared across all documents. In all our experiments, we use a bidirectional GRU-RNN (Chung et al., 2014) for the encoder and a uni-directional RNN for the decoder. 
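The construction of the UNK-pointers data described above corresponds, per source/target pair, to roughly the helper below. The -1 convention for "no pointer" and the function signature are our own; the substance (the count threshold of 5 applied to each side separately, matching against the source before UNK replacement, and keeping the first occurrence on ties) follows the text.

```python
UNK = '<unk>'

def make_unk_pointer_example(src_tokens, tgt_tokens, src_counts, tgt_counts,
                             min_count=5):
    """Build one UNK-pointers training example (sketch).

    src_counts / tgt_counts map words to their corpus-level frequencies on
    the source and target sides. Words seen fewer than `min_count` times
    are replaced by UNK; every target UNK gets a pointer to the position of
    the same word in the original (pre-UNK) source, first occurrence wins,
    or -1 if the word does not occur in the source at all.
    """
    src_unked = [w if src_counts.get(w, 0) >= min_count else UNK
                 for w in src_tokens]
    tgt_unked, pointers = [], []
    for w in tgt_tokens:
        if tgt_counts.get(w, 0) >= min_count:
            tgt_unked.append(w)
            pointers.append(-1)            # generated from the shortlist
        else:
            tgt_unked.append(UNK)
            pointers.append(src_tokens.index(w) if w in src_tokens else -1)
    return src_unked, tgt_unked, pointers
```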
To speed-up training, we use the large-vocabulary trick (Jean et al., 2014) where we limit the vocabulary of the softmax layer of the decoder to 2000 words dynamically chosen from the words in the source documents of each batch and the most common words in the target vocabulary. In both experiments, we fix the embedding size to 100 and the hidden state dimension to 200. We use pretrained word2vec vectors trained on the same corpus to initialize the embeddings, but we finetuned them by backpropagating through the embeddings during training. Our vocabulary sizes are fixed to 125K for source and 75K for target for both exper145 iments. We use the reference data for pointers for the model only at the training time. During the test time, the switch makes a decision at every timestep on which softmax layer to use. For evaluation, we use full-length Rouge F1 using the official evaluation tool 2. In their work, the authors of (Bahdanau et al., 2014) use full-length Rouge Recall on this corpus, since the maximum length of limited-length version of Rouge recall of 75 bytes (intended for DUC data) is already long for Gigaword summaries. However, since full-length Recall can unfairly reward longer summaries, we also use full-length F1 in our experiments for a fair comparison between our models, independent of the summary length. The experimental results comparing the Pointer Softmax with NMT model are displayed in Table 1 for the UNK pointers data and in Table 2 for the entity pointers data. As the experiments show, pointer softmax improves over the baseline NMT on both UNK data and entities data. Our hope was that the improvement would be larger for the entities data since the incidence of pointers was much greater. However, it turns out this is not the case, and we suspect the main reason is anonymization of entities which removed datasparsity by converting all entities to integer-ids that are shared across all documents. We believe that on de-anonymized data, our model could help more, since the issue of data-sparsity is more acute in this case. Table 1: Results on Gigaword Corpus when pointers are used for UNKs in the training data, using Rouge-F1 as the evaluation metric. Rouge-1 Rouge-2 Rouge-L NMT + lvt 34.87 16.54 32.27 NMT + lvt + PS 35.19 16.66 32.51 Table 2: Results on anonymized Gigaword Corpus when pointers are used for entities, using RougeF1 as the evaluation metric. Rouge-1 Rouge-2 Rouge-L NMT + lvt 34.89 16.78 32.37 NMT + lvt + PS 35.11 16.76 32.55 2http://www.berouge.com/Pages/default. aspx Table 3: Results on Gigaword Corpus for modeling UNK’s with pointers in terms of recall. Rouge-1 Rouge-2 Rouge-L NMT + lvt 36.45 17.41 33.90 NMT + lvt + PS 37.29 17.75 34.70 In Table 3, we provide the results for summarization on Gigaword corpus in terms of recall as also similar comparison done by (Rush et al., 2015). We observe improvements on all the scores with the addition of pointer softmax. Let us note that, since the test set of (Rush et al., 2015) is not publicly available, we sample 2000 texts with their summaries without replacement from the validation set and used those examples as our test set. In Table 4 we present a few system generated summaries from the Pointer Softmax model trained on the UNK pointers data. From those examples, it is apparent that the model has learned to accurately point to the source positions whenever it needs to generate rare words in the summary. 
5.3 Neural Machine Translation In our neural machine translation (NMT) experiments, we train NMT models with attention on the Europarl corpus (Bahdanau et al., 2014), using sequences of length up to 50, for English-to-French translation. (In these experiments we use existing code, provided at https://github.com/kyunghyuncho/dl4mt-material; on the original model we only changed the last softmax layer.) All models are trained with early stopping based on the negative log-likelihood (NLL) on the development set. We report the performance of our models on newstest2011 using the BLEU score, computed with the multi-bleu.perl script from Moses on tokenized sentence pairs. We use 30,000 tokens for both the source and the target language shortlist vocabularies (one of the tokens is reserved for unknown words). The whole corpus contains 134,831 unique English words and 153,083 unique French words. We have created a word-level dictionary from French to English which contains translations of 15,953 words that are neither in the shortlist vocabulary nor in the dictionary of common words for the source and the target. There are about 49,490 words shared between the English and French parallel corpora of Europarl.

Table 4: Generated summaries from NMT with PS. Boldface words are the words copied from the source.
Source #1: china 's tang gonghong set a world record with a clean and jerk lift of ### kilograms to win the women 's over-## kilogram weightlifting title at the asian games on tuesday .
Target #1: china 's tang <unk>,sets world weightlifting record
NMT+PS #1: china 's tang gonghong wins women 's weightlifting weightlifting title at asian games
Source #2: owing to criticism , nbc said on wednesday that it was ending a three-month-old experiment that would have brought the first liquor advertisements onto national broadcast network television .
Target #2: advertising : nbc retreats from liquor commercials
NMT+PS #2: nbc says it is ending a three-month-old experiment
Source #3: a senior trade union official here wednesday called on ghana 's government to be “ mindful of the plight ” of the ordinary people in the country in its decisions on tax increases .
Target #3: tuc official,on behalf of ordinary ghanaians
NMT+PS #3: ghana 's government urged to be mindful of the plight

During training, in order to decide whether to pick a word from the source sentence using attention/pointers or to predict the word from the short-list vocabulary, we use a simple heuristic. If the target word yt is not in the short-list vocabulary, we first check whether yt itself appears in the source sentence, using the lookup table of words shared between the source and the target language; if it does, we use the location of that word in the source as the target. Otherwise, we check whether one of the English senses of the French word, taken from the cross-language dictionary, is in the source; if it is, we use the location of that word as our target. Otherwise we just use the argmax of lt as the target. For the switching network dt, we observed that using a two-layer MLP with the noisy-tanh activation function (Gulcehre et al., 2016) and a residual connection (He et al., 2015) from the lower layer to the upper hidden layers improves the BLEU score by about 1 point over a dt using the ReLU activation function.
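The training-time copy heuristic described above can be sketched as follows. The dictionary format, the helper names, and the convention of returning a ("pointer", index) pair are our own assumptions rather than the authors' code; multiple source matches resolve to the first occurrence.

```python
def pointer_target(y_t, source_tokens, shortlist, shared_words, fr2en_dict, l_t):
    """Decide the supervision for target word y_t during training.

    Returns ("shortlist", y_t) when the word should come from the softmax,
    or ("pointer", i) when the model should point at source position i.
    l_t is the vector of location (attention) scores over source positions.
    """
    if y_t in shortlist:
        return ("shortlist", y_t)
    # 1) the target word itself appears in the source (shared-word lookup)
    if y_t in shared_words and y_t in source_tokens:
        return ("pointer", source_tokens.index(y_t))
    # 2) an English sense of the French word (cross-language dictionary) is in the source
    for sense in fr2en_dict.get(y_t, []):
        if sense in source_tokens:
            return ("pointer", source_tokens.index(sense))
    # 3) otherwise use the argmax of l_t as the pointer target
    return ("pointer", max(range(len(l_t)), key=lambda i: l_t[i]))


# toy usage
src = "the chancellor visited berlin".split()
print(pointer_target("berlin", src, shortlist={"le", "la"}, shared_words={"berlin"},
                     fr2en_dict={"chancelier": ["chancellor"]}, l_t=[0.1, 0.2, 0.3, 0.4]))
# -> ("pointer", 3)
```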
We initialized the biases of the last sigmoid layer of dt to −1, so that dt is biased toward choosing the shortlist vocabulary at the beginning of training. We renormalize the gradients if their norm exceeds 1 (Pascanu et al., 2012). In Table 5, we provide the results of NMT with pointer softmax; we observe an improvement of about 3.6 BLEU points over our baseline.

Table 5: Europarl Dataset (EN-FR)
            BLEU-4
NMT         20.19
NMT + PS    23.76

In Figure 4, we show the validation curves of the NMT model with attention and the NMT model with the shortlist-softmax layer. Pointer softmax converges faster in terms of the number of minibatch updates and achieves a lower validation negative log-likelihood (NLL) (63.91) after 200k updates over the Europarl dataset than the NMT model with shortlist softmax trained for 400k minibatch updates (65.26). Pointer softmax converges faster than the model using the shortlist softmax because the targets provided to the pointer softmax also act as guiding hints for the attention.

Figure 4: A comparison of the validation learning curves (validation NLL against the number of iterations, x5000 minibatches) of the same NMT model trained with the pointer softmax and with the regular softmax layer. The model trained with pointer softmax converges faster than the one with the regular softmax layer. The switching network for the pointer softmax in this figure uses the ReLU activation function.

6 Conclusion In this paper, we propose a simple extension to the traditional soft attention-based shortlist softmax by using pointers over the input sequence. We show that the whole model can be trained jointly with a single objective function. We observe noticeable improvements over the baselines on machine translation and summarization tasks by using pointer softmax. By making a very simple modification to the NMT model, our model is able to generalize to unseen words and can deal with rare words more efficiently. For the summarization task on the Gigaword dataset, the pointer softmax was able to improve the results even when used together with the large-vocabulary trick. For machine translation on the Europarl corpora, we observed that training with the pointer softmax also improved the convergence speed of the model. References [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. [Bengio and Senécal2008] Yoshua Bengio and Jean-Sébastien Senécal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. Neural Networks, IEEE Transactions on, 19(4):713–722. [Bordes et al.2015] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. [Cheng and Lapata2016] Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. arXiv preprint arXiv:1603.07252. [Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
[Chung et al.2014] Junyoung Chung, Çaglar Gülçehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555. [Gillick et al.2015] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103. [Graves2013] Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. [Gu et al.2016] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. [Gulcehre et al.2016] Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. 2016. Noisy activation functions. arXiv preprint arXiv:1603.00391. [Gutmann and Hyvärinen2012] Michael U. Gutmann and Aapo Hyvärinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. The Journal of Machine Learning Research, 13(1):307–361. [He et al.2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. [Hermann et al.2015] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684–1692. [Jean et al.2014] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007. [Kingma and Ba2015] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. [Luong et al.2015] Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. [Matthews et al.2012] Danielle Matthews, Tanya Behne, Elena Lieven, and Michael Tomasello. 2012. Origins of the human pointing gesture: a training study. Developmental Science, 15(6):817–829. [Mnih and Kavukcuoglu2013] Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pages 2265–2273. [Morin and Bengio2005] Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pages 246–252. Citeseer. [Pascanu et al.2012] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063. [Pascanu et al.2013] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026. [Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. CoRR, abs/1509.00685. [Schuster and Paliwal1997] Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681. [Sennrich et al.2015] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. [Theano Development Team2016] Theano Development Team. 2016.
Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May. [Tomasello et al.2007] Michael Tomasello, Malinda Carpenter, and Ulf Liszkowski. 2007. A new look at infant pointing. Child Development, 78(3):705–722. [Vinyals et al.2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682. [Zeiler2012] Matthew D. Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. 7 Acknowledgments We would also like to thank the developers of Theano (http://deeplearning.net/software/theano/) for developing such a powerful tool for scientific computing (Theano Development Team, 2016). We acknowledge the support of the following organizations for research funding and computing support: NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. C. G. thanks IBM T.J. Watson Research for funding this research during his internship between October 2015 and January 2016.
2016
14
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1478–1488, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Investigating Language Universal and Specific Properties in Word Embeddings Peng Qian Xipeng Qiu∗ Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {pqian11, xpqiu, xjhuang}@fudan.edu.cn Abstract Recently, many NLP tasks have benefited from distributed word representation. However, it remains unknown whether embedding models are really immune to the typological diversity of languages, despite the language-independent architecture. Here we investigate three representative models on a large set of language samples by mapping dense embedding to sparse linguistic property space. Experiment results reveal the language universal and specific properties encoded in various word representation. Additionally, strong evidence supports the utility of word form, especially for inflectional languages. 1 Introduction Word representation is a core issue in natural language processing. Context-based word representation, which is inspired by Harris (1954), has achieved huge successes in many NLP applications. Despite its popularity, character-based approach also comes out as an equal competitor (Santos and Zadrozny, 2014; Kim et al., 2016; Ling et al., 2015b; Ling et al., 2015a; Faruqui et al., 2016; Ballesteros et al., 2015) . Moreover, questions arise when we consider what these models could capture from linguistic cues under the perspective of cross-language typological diversity, as is argued by Bender (2009). Despite previous efforts in empirically interpreting word embedding and exploring the intrinsic/extrinsic factors in learning process (Andreas and Klein, 2014; Lai et al., 2015; K¨ohn, 2015; Melamud et al., 2016), it remains unknown whether embedding models are really immune to the structural variance of languages. ∗Corresponding author. Current research has gaps for understanding model behaviours towards language typological diversity as well as the utility of context and form for different languages. Thus, we select three representative types of models and design a series of experiments to reveal the universals and specifics of various word representations on decoding linguistic properties. Our work contributes to shedding new insights into the following topics: a) How do typological differences of language structure influence a word embedding model? Does a model behave similarly towards phylogenetically-related languages? b) Is word form a more efficient predictor of a certain grammatical function than word context for specific languages? c) How do the neurons of a model respond to linguistic features? Can we explain the utility of context and form by analyzing neuron activation pattern? 2 Experiment Design To study the proposed questions above, we design four series of experiments to comprehensively compare context-based and character-based word representations on different languages, covering syntactic, morphological and semantic properties. The basic paradigm is to decode interpretable linguistic features from a target collection of word representations. We hypothesize that there exists a linear/nonlinear map between a word representation x and a high-level sparse feature vector y if the word vector implicitly encode sufficient information1. 
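The hypothesized map can be made concrete with a small sketch: a closed-form least-squares fit for the linear case, and a 4-hidden-layer MLP for the nonlinear case, following the 50-80-80-50 dimensions and 90%/10% split described below. The use of scikit-learn and all function names here are our own choices, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_linear_map(X, Y):
    """Least-squares matrix Theta such that X @ Theta approximates Y.
    X: (n_words, embed_dim) dense embeddings; Y: (n_words, n_labels) sparse feature vectors."""
    Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Theta

def fit_nonlinear_map(X, Y):
    """MLP with 4 hidden layers (50, 80, 80, 50), trained by backpropagation."""
    mlp = MLPRegressor(hidden_layer_sizes=(50, 80, 80, 50), max_iter=2000)
    return mlp.fit(X, Y)

# toy usage: 90/10 split over the words carrying the target feature
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))               # e.g. 64-dimensional word vectors
Y = rng.random(size=(1000, 17))               # e.g. normalized POS-tag distributions
split = int(0.9 * len(X))
Theta = fit_linear_map(X[:split], Y[:split])
pred_linear = X[split:] @ Theta
mlp = fit_nonlinear_map(X[:split], Y[:split])
pred_nonlinear = mlp.predict(X[split:])
```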
Figure 1 visualizes how a word embedding is mapped to different linguistic attribute vectors. (Our experimental results show that the nonlinear mapping model works significantly better than the linear map for all languages; only nonlinear mapping accuracies are reported in the following sections due to the space limit.) For example, the Czech word dětem means children in English. Its grammatical gender is feminine, it is in the plural form, and it is used in the dative case. These are all important properties of a word. The word embedding of dětem is mapped to a different sparse representation for each of these lexical properties. Listed in Table 1 is the outline of the experiments.

Table 1: Outline of Experiment Design.
ID   Attribute                                               Category
I    Part-of-Speech                                          Syntax
II   Dependency Relation                                     Syntax
III  Gender / Number / Case / Animacy / Definite / Person /  Morphology
     Tense / Aspect / Mood / Voice / PronType / VerbForm
IV   Sentiment Score                                         Semantics

For the linear map, we train a matrix Θ that maps a word embedding x to a sparse feature vector y with the least L2 error. For the nonlinear map, we train a neural network (MLP) with 4 hidden layers via backpropagation; their dimensions are 50, 80, 80, and 50, in that order. For each linguistic feature of each language, a mapping model is trained on a randomly selected 90% of the words carrying the target feature and tested on the remaining 10%. Details about the construction of the linguistic feature vectors are given in the section of the corresponding experiment. For syntactic and morphological features, we construct the corresponding feature vectors of a word from the Universal Dependencies Treebank (Nivre et al., 2015) and the Chinese Treebank (CTB 7.0) (Xue et al., 2010). A word w with a linguistic attribute a (e.g. POS) may be annotated with one or more labels (e.g. NOUN, VERB, etc.) from the possible label set of a in the whole treebank. We calculate the normalized label frequency distribution ⃗y_w^a from the manual annotation of the corpus as the representation of the linguistic attribute a for the word w in each language. For the word sentiment feature, we use the manually annotated data collected by Dodds et al. (2015). The data contains emotion scores for a list of words in several languages. In our experiment, the original score scale in Dodds et al. (2015) is transformed into the interval [0, 1].

Figure 1: Visualizing the experiment paradigm. The dense representation of the Czech word dětem (children.Female.Plural.Dative) is mapped to different sparse representations of its lexical properties (Gender, Case, Number) respectively.

3 Embedding Model Description Faced with the three questions proposed before, we select the following models from various candidates, as they are popular, representative, and based on either word context or purely word form. Type I: the C&W model (referred to as CW), which aims to estimate the joint probability of a word sequence (Collobert et al., 2011). In this paper, C&W word vectors are all from the released version of the polyglot multilingual embeddings (Al-Rfou et al., 2013) trained on Wikipedia. Type II: Skip-gram (referred to as SG), which aims to predict the context words based on the target word.
We use word2vec (Mikolov et al., 2013) to train SG on multilingual Wikipediea provided by (Al-Rfou et al., 2013). Type III Character-based LSTM autoencoder (referred as AE in short), which takes the character sequence of a word as the input and reconstruct the input character sequence. It takes the advantage of pure word form instead of the context. The hidden layer vector of the model is used as a representation of the word. In this way, we are able to quantify the utility of pure word form by evaluating the representation generated from the character-based LSTM autoencoder on different decoding tasks. We trained one-hidden layer AE with the words covered in CW for each language independently. To ensure a fair comparison, all the word vectors have the same dimension 64. CW and SG are trained with a common 5-word window size. 4 Results 4.1 Part-of-Speech In experiment I, we decode Part-of-Speech, the most basic syntactic feature, from word embed2SG results for some languages are missed due to the lack of the corpus data or special preprocessing. 1479 ISO Language |V | CW SG AE ar Arabic 24967 0.712 0.658 0.648 I ga Irish 3164 0.826 – 0.697 zh Chinese 30496 0.780 0.721 n/a II fa Persian 11471 0.895 0.827 0.746 III la Latin 6678 0.746 – 0.707 hi Hindi 12703 0.858 0.799 0.592 IV ta Tamil 1940 0.768 – 0.541 eu Basque 11212 0.857 – 0.711 et Estonian 2166 0.862 0.765 0.530 V fi Finnish 26086 0.910 0.818 0.715 hu Hungarian 6105 0.912 0.831 0.674 de German 29899 0.916 0.902 0.74 VI fr French 29445 0.905 0.889 0.759 VII pt Portuguese 17715 0.927 0.903 0.746 he Hebrew 22754 0.911 – 0.680 ru Russian 55416 0.959 0.913 0.906 hr Croatian 12581 0.926 0.862 0.790 da Danish 10705 0.913 0.913 0.666 sv Swedish 8408 0.938 0.888 0.670 no Norwegian 18709 0.926 0.861 0.704 sl Slovenian 19514 0.919 0.820 0.756 cs Czech 55789 0.949 0.883 0.853 ro Romanian 3170 0.858 0.814 0.618 en English 15116 0.857 0.839 0.659 id Indonesian 15635 0.852 0.819 0.801 it Italian 21184 0.902 0.880 0.700 VIII es Spanish 33696 0.906 0.883 0.75 el Greek 8499 0.937 0.879 0.801 pl Polish 18062 0.941 0.842 0.800 bg Bulgarian 17079 0.920 0.852 0.741 Table 2: Model comparison on decoding POS, along with WALS word-order features. Type I: VS+VO+Pre+NR. II: SV+VO+Pre+RN. III: SV+OV+Pre+NR. IV: SV+OV+Post+RN/Co. V: SV+OV+Post+NR. VI: SV+ND+Pre+NR. VII: SV+VO+Pre+NR. VIII: ND+VO+Pre+NR. ding. To construct the POS vector for each word, we calculate the normalized POS-tag frequency distribution from the manual annotation of the Universal Dependencies (Version 1.2) (De Marneffe et al., 2014) and Chinese Treebank (CTB 7.0) (Xue et al., 2010) for each language. We evaluate the predicted results by judging whether the most probable POS tag of a word predicted by the model equals to the most probable correct POS tag of the word. Formally, for a set of words W in a language, the correct tag of the ith word Wi is ya Wi and the predicted tag is ˆya Wi. The accuracy is computed as: acc = 1 |W| |W| X i ∆(ˆya Wi, ya Wi) (1) ∆(ˆya Wi, ya Wi) = ( 1 ˆya Wi = ya Wi 0 otherwise (2) It is obvious that context-based representation (CW and SG) performs better than character-based representation (AE). We, however, notice that AE peforms nearly as well as the context-based embedding on Russian, Czech and Indonesian. 0.7 0.75 0.8 0.85 0.9 0.95 I II III IV V VI VII Accuracy Word Order Type Figure 2: Interaction between CW performances on decoding POS tag and WALS word order features. 
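As a concrete illustration of the evaluation in Equations (1)-(2) above, a word counts as correctly decoded when the most probable label under the predicted distribution matches the most probable label under the gold distribution. The sketch below (variable and function names are ours) computes that accuracy.

```python
import numpy as np

def argmax_accuracy(pred_dists, gold_dists):
    """pred_dists, gold_dists: arrays of shape (n_words, n_labels) holding the
    predicted and gold normalized label-frequency distributions."""
    pred_labels = np.argmax(pred_dists, axis=1)
    gold_labels = np.argmax(gold_dists, axis=1)
    return float(np.mean(pred_labels == gold_labels))

# toy usage with three words and three POS labels
gold = np.array([[0.9, 0.1, 0.0],    # mostly NOUN
                 [0.2, 0.7, 0.1],    # mostly VERB
                 [0.1, 0.1, 0.8]])   # mostly ADJ
pred = np.array([[0.6, 0.3, 0.1],
                 [0.5, 0.4, 0.1],    # wrong: predicts NOUN
                 [0.1, 0.2, 0.7]])
print(argmax_accuracy(pred, gold))   # 0.666...
```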
It turns out that these languages employ affix markers to indicate the POS category of a word. For example, in Indonesian, co-occurrence of the prefix ‘me-’ and the suffix ‘-kan’ in the word form means that this word is a verb. Besides, we explore the relationship between CW performances on decoding POS tags and the word order typology of different languages, since CW is sensitive to word order. We classify the languages into 8 types, based on the basic word order features (Order of Subject and Verb; Order of Object and Verb; Order of Noun and Adposition; Order of Noun and Relative clause) from the World Atlas of Language Structures (Dryer and Haspelmath, 2013). Figure 2 shows that CW performs similar in this experiment for languages of the same word order type, indicating an implicit interaction between typological diversity and model performance. 4.2 Dependency Relation In this section, we will get into the details of Experiment II: decoding dependency relation from word representation. Dependency relation refers to how a word is syntactically related to other words in a sentence. It is the label annotated on the arc of the dependency tree. We compute the normalized frequency distribution of dependency relations for each word in the Universal Dependency Treebank and Chinese Treebank (CTB 7.0) (Xue et al., 2010). The distribution of dependency relations is the probabilistic distribution of different arc types, such as subject, object, nmod, etc. Evaluation is similar to that in Section 4.1. We can see from Figure 3 that the overall performance is worse than that in Experiment I, as dependency analysis is more difficult than POS induction. CW achieves the best performance. It 1480 zh id fa sv en no da de hi fr ro it es pt el et fibg hr sl pl 0.3 0.4 0.5 0.6 Slavic Accuracy CW SG AE Figure 3: Comparison of models on decoding DEPENDENCY RELATION. cs sl pl bg no da sv el hi it es pt 0.7 0.8 0.9 1 Romance Slavic Accuracy CW SG AE Figure 4: Comparison of models on decoding GENDER. is also interesting to see that all the embeddings work slightly better on Slavic languages. 4.3 Morphological Features Experiment III aims to decode morphological information from various word representation. Evaluation is similar to that in Section 4.1. Morphological information refers to the explicit marker of the grammatical functions. We consider 12 morphological features, as is shown in Table 1. They can be split into 5 nominal features (GENDER, NUMBER, CASE, ANIMACY, DEFINITENESS) and 7 verbal features (PERSON, TENSE, ASPECT, MOOD, VOICE, PRONOUNTYPE, VERBFORM). Gender is a very special feature for western languages. It is partially based on semantics, such as biological sex. In most of the languages with gender features, there are agreements between the noun and the determiners. This could be a good indicator for context-based model. On the other hand, gender is also expressed as an inflectional feature via declension or umlaut, especially for adjectives and verbs. Therefore, we can see from Figure 4 that the AE also achieves some good results without using context information. From a typological perspective, we found that all the embeddings work well on decoding word gender of Romance languages (Italian, Spanish and Portuguese) but worst on Slavic languages (e.g. Czech, Slovenian). This is probably hi fa cs pl bg hr sl da sv no en el he it es pt ro fiet hu 0.9 1 Romance Slavic Accuracy CW SG AE Figure 5: Model comparison on decoding NUMBER. Language |V | C&W SG AE # Case Analy. 
Danish 372 0.947 0.946 1.000 3 Swedish 5893 0.995 0.990 0.981 2 Bulgarian 104 0.636 0.546 0.818 4 Agglut. Finnish 21094 0.868 0.871 0.908 15 Hungarian 4536 0.852 0.901 22 Tamil 1144 0.896 – 0.835 7 Basque 8020 0.761 – 0.857 15 Fusional Hindi 10682 0.712 0.704 0.646 7 Czech 38666 0.788 0.776 0.663 7 Polish 13715 0.828 0.785 0.636 7 Slovenian 15150 0.796 0.768 0.617 6 Croatian 9945 0.807 0.789 0.628 7 Greek 5790 0.841 0.851 0.774 5 Latin 4773 0.674 – 0.636 7 Table 3: Model comparison on decoding CASE. because that Romance languages employ regular rules to judge the gender of a word. However, Slavic languages have other nonlinear fusional morphological features that are not easy to tackle. Number refers to the linguistic abstraction of objects’ quantities. It is an inflectional feature of noun and other parts of speech (adjective, verb) that have agreement with noun. The basic value can be singular, dual or plural. We can see from Figure 5 that SG, CW and AE all perform well. AE performs almost as well as CW and SG on English, Spanish and Portuguese. Case is one of the most significant features. Gender and number are indexical morphemes, which means that there is a phrase in the sentence that necessarily agrees with the target item. Case, on the contrary, is a relational morpheme, according to (Croft, 2002). Case reflects the semantic role of a noun, relative to the pivot verb. All the languages studied in this paper, more or less, employ word inflection to explicitly express the specific case role. The model performances are listed in Table 3. We notice some important inter-language differences. Swedish has only two cases, nominal and genitive. The form of genitive case is very simple. Adding an s to the coda of a noun will change it to genitive case. Thus, we can see that character-based encoding performs well 1481 on Swedish. Since genitive case usually means possession, we also notice that context-based distributed representation also performs well in decoding case information from Swedish words. By classifying these languages into different morphological types in Table 3, we find that word vectors of highly inflected fusional languages (e.g. Czech) performs worse than agglutinative languages (e.g. Finnish). This is typically reflected in AE, as agglutinative languages simply concatenate the case marker with the nominative form of a noun. The morphological transformation of agglutinative languages is linear and simple. Besides, the case system of the analytic languages has been largely simplified due to historical change. Therefore, all the embeddings perform well on analytic languages. This evidence supports that morphological complexity is positively correlated with the quality of word embedding. Besides, for fusional languages, using distributed representation and context information would largely increase the performance. This, in turn, indicates that cases are a special semantic relations distributed in the words around the target noun. Although a case is not explicitly agreed with other components in an utterance, the word category might serve as a good indicator, such as preposition and verb. Animacy is a special nominal feature in a few languages, which is used to discriminate alive and hi cs bg sl el it es pt fi 0.9 0.95 1 Romance Slavic Accuracy (a) PERSON bg da sv no eu ro hu 0.9 0.95 1 Accuracy (b) DEFINITE cs pl 0.8 0.9 1 (c) ANIM. Figure 6: Model comparison on decoding PERSON, DEFINITENESS and ANIMACY. animate objects from inanimate nouns. 
Generally, it is based on the lexical semantic feature. As is shown in Figure 6, it is easier to decode animacy from the context-based representations than character-based representation. hi he cs pl bg hr sl da sv no en el fiet hu it es pt ro fa 0.8 0.9 1 Romance Slavic Accuracy CW SG AE (a) TENSE hi cs pl bg da no en el fi 0.85 0.9 0.95 1 Slavic Accuracy CW SG AE (b) VOICE cs pl bg sl da sv no en it es pt fi 0.9 0.95 1 Slavic Romance Accuracy CW SG AE (c) MOOD pl bg cs sl eu hi 0.8 0.9 1 Slavic Accuracy (d) ASPECT hi bg hr sl it es pt fi 0.6 0.8 1 Slavic Romance (e) PRONTYPE cs pl bg hr da no en it es pt fi et hu 0.8 0.9 1 Slavic Accuracy CW SG AE (f) VERBFORM Figure 7: Model comparison on TENSE, VOICE, MOOD, ASPECT, PRONTYPE and VERBFORM 1482 en ar de es fr hi id pt ru zh 0.2 0.4 0.6 0.8 Correlation CW SG AE Figure 8: Model comparison on EMOTION. From Figure 6, 7, we can see that all the three models give quite perfect performance on decoding person, definiteness, tense, voice, mood, aspect, pronoun type and verb form. Overall, character-based representation is most effective for Slavic languages on decoding verbal morphological features but not nominal features. The result is vice versa for Romance languages, which is not as morphologically complex as Slavic. It is worth noticing that models behave differently on Bulgarian, an analytic language, although Bulgarian belongs to Slavic language from the phylogenetic perspective. We think that this is because many morphological features in Bulgarian have been simplified or weakened. 4.4 Emotion Score In Experiment IV, we use the manually annotated data collected by Dodds et al. (2015). The data contains emotion scores for a list of words in several languages. In our experiment, the original score scale is transformed into the interval [0, 1]. A nonlinear map is trained to regress the representation of a word (CW, SG, AE) to its emotion score. To evaluate the predicted results, we measure the Spearman correlation between the gold scores and predicted scores. The result in Figure 8 reveals a significantly strong correlation between the predicted emotion scores of SG and the real emotion scores. CW comes the second. For AE, it is hard to decode emotion just from the word form. 5 Contrastive Analysis As we have mentioned before, Type I C&W model utilizes ordered context information to train the distributed word representation. Type II skip-gram model utilizes unordered context information. Type III character-based LSTM autoencoder model utilizes the grapheme information to represent a word. Towards the key questions that we raised at the very beginning of the paper, we propose our contrastive analysis Italian Spanish Portuguese Greek Bulgarian Danish Swedish Norwegian Hindi Polish Slovenian (a) Hierarchical tree based on model performances Italian Spanish Portuguese Swedish Danish Greek Bulgarian Slovenian Polish Hindi Norwegian (b) WALS Genus Tree Figure 9: Comparison of the tree based on model performances and the WALS dendrogram manually constructed by linguists. based on the experiment results. 5.1 Typology vs. Phylogeny Experiment results have shown that word embedding models are influenced by the syntactic and morphological diversity more or less. Here we display how typological similarity and phylogenetic relation is revealed from the observed model performance variation. We hierarchically cluster languages according to the model performance on decoding syntactic and morphological features. 
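For the emotion-score experiment in Section 4.4 above, a minimal evaluation sketch: gold scores are rescaled to [0, 1] and compared to the regressor's predictions with a Spearman correlation. Whether the rescaling is min-max or a fixed affine transform of the original rating scale is not stated in the text, so the min-max choice below is our assumption, as are the function names.

```python
import numpy as np
from scipy.stats import spearmanr

def rescale_01(scores):
    """Min-max rescale raw emotion scores to the interval [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def evaluate_emotion_predictions(gold_raw, predicted):
    """Spearman correlation between gold (rescaled) and predicted emotion scores."""
    rho, pvalue = spearmanr(rescale_01(gold_raw), predicted)
    return rho, pvalue

# toy usage
gold = [1.2, 7.8, 5.5, 3.3, 8.9]
pred = [0.10, 0.85, 0.60, 0.30, 0.95]
print(evaluate_emotion_predictions(gold, pred))
```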
The dendrogram of the languages in Figure 9 vividly shows that most of the phylogeneticrelated languages are clustered together. However, there is some interesting exceptions. Bulgarian does not form a primary cluster with other Slavic languages (e.g. Slovenian). We think that this is because Bulgarian is typologically dissimilar to Slavic language family. Therefore, Figure 9 reflects that language typology explains the model variation better than language phylogeny. 5.2 Form vs. Context Here we discuss the effectiveness of word form and different types of word context. Regarding the correlation between context type and language function, previous results show that SG performs worse than CW on decoding POS and dependency relation while SG performs better than CW on decoding emotion score. Since CW keeps word order of the context, this comparison suggests that word order information is vital to syntactic information, but it might also be a kind of noise for the word vectors to encode semantic information. 1483 CW SG AE 0.8 0.85 0.9 0.95 Accuracy gen num cas ani def (a) Nominal features CW SG AE 0.8 0.9 Accuracy ten per moo voi asp pro vfo (b) Verbal features Figure 10: Overall performances of different models (averaged over languages) on decoding morphological features. Regarding the correlation between form and language function, previous results on POS, dependency relation and emotion scores show the effectiveness of the word context. However, for morphological features, results in Table 10 indicate that context-based word representation works slightly better than character-based representation. Specifically, character-based embedding (AE) does outperform context-based embedding (CW, SG) on decoding verbal morphological features, even though AE does not access any context information. In other words, word form could be an explicit and discriminative cue for the model to decode the morphological feature of a word. To prove that word form could provides informative and explicit cues for grammatical functions, we train another shuffled character-based word representation, which means that the autoencoder inputs shuffled letters and outputs the shuffled letters again. We use the hidden layer of the shuffled autoencoder as the representation for each word. The result in Table 4 shows that now the character-based model cannot perform as well as the original character-based autoencoder representation does, which again proves that the order of the word form is necessary for learning the grammatical function of a word. Since many languages share similar phonographic writing systems, we naturally want to know whether the grapheme-phoneme knowledge from one language can be transferred to another language. We train an autoencoder purely on Lan. Raw Shuf. Lan. Raw Shuf. Russian 0.906 0.671 Slovenian 0.800 0.653 Table 4: Comparison of original and shuffled character-based word representation on decoding POS tag. Source Language Arabic Finnish Target Language fa ud en shuf en rand Bigram type overlap. 0.176 0.761 0.891 0.864 0.648 Bigram token overlap. 0.689 0.881 0.999 0.993 0.650 Trigram type overlap. 0.523 0.522 0.665 0.449 0.078 Trigram token overlap. 0.526 0.585 0.978 0.796 0.078 Reconstruction Acc. 0.586 0.689 0.95 0.83 0.22 Table 5: Comparison of morpho-phonological knowledge transfer on different language pairs. The reconstruction accuracy is correlated with the overlapping proportion of grapheme patterns between source language and target language. 
Finnish and directly test the trained model on memorizing raw English words, letter-shuffled English words, and random letter sequences. Results in Table 5 indicate that the character autoencoder can successfully reconstruct raw English words, but not the letter-shuffled English words or random letter sequences. However, if we train an autoencoder purely on Arabic and then directly test the trained model on memorizing Urdu (ud) words or Persian (fa) words, the reconstruction accuracy is quite low, although Arabic, Persian and Urdu use the same Arabic writing system. To explain the behaviour of AE, we calculate the correlation between the bigram character frequencies in the words of the training language (e.g. Finnish) and those in the words of the testing language (e.g. English). Table 5 reveals that phonological knowledge can be transferred if two languages share similar bigram and trigram character frequency distributions. For example, Finnish and English both use the Latin writing system, and their orthographies encode similar phonological structure. Arabic is a Semitic language, whereas Persian is an Indo-European language, and their writing systems encode different phonological structures. This again shows that the character-based LSTM autoencoder does ‘memorize’ the grapheme or phoneme clusters of words. Morpho-phonological knowledge can be transferred among typologically-related languages. Additionally, we are surprised to find that using the English word representations encoded by the AE model trained on Finnish can increase the accuracy of the English AE embedding in Experiment I (up to 0.7076, compared with the original accuracy of 0.6587). This is probably due to the shared knowledge about the morphemes in the word form.

Figure 11: Visualising the neuron activation pattern for different word embedding models (selectivity of the 64 model neurons for CW, SG and AE). Panels: (a) Me-kan, (b) -nya, (c) -indonesia, (d) country name, (e) verb, (f) neuron activation pattern as a function of the threshold t.
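The character n-gram comparison used to explain this transfer effect can be sketched as follows. The type/token overlap statistics the authors report are summarized in Table 5; the implementation below is our own guess at one reasonable formulation, with our own function names.

```python
from collections import Counter

def char_ngrams(word, n):
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def ngram_overlap(source_vocab, target_vocab, n=2, by_token=False):
    """Fraction of target-language character n-grams that also occur in the
    source language. With by_token=True, n-grams are weighted by frequency."""
    src_counts = Counter(g for w in source_vocab for g in char_ngrams(w, n))
    tgt_counts = Counter(g for w in target_vocab for g in char_ngrams(w, n))
    if by_token:
        covered = sum(c for g, c in tgt_counts.items() if g in src_counts)
        return covered / sum(tgt_counts.values())
    covered = sum(1 for g in tgt_counts if g in src_counts)
    return covered / len(tgt_counts)

# toy usage: bigram type overlap between a tiny "Finnish" and "English" vocabulary
fi = ["talo", "kala", "katto"]
en = ["tall", "call", "cat"]
print(round(ngram_overlap(fi, en, n=2), 3))   # 0.6 on this toy data
```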
5.3 Neuronal Activation Pattern Le (2011) found that it is empirically possible to learn a ‘grandmother neuron’-like face detector from unlabelled data: in their experiment on unlabeled data, one neuron learns to activate specifically for pictures with cat faces rather than for other pictures. Based on this finding, we hypothesize that there should exist selective neuron activation towards a linguistic feature trigger. The feature trigger can be a special consonant cluster, a specific suffix, or the syntactic category of a word. To quantitatively show the collective neuron behaviour and the individual neuron response towards different linguistic triggers, we compute the maximum probability that a neuron discriminates the words with trigger f from the words without trigger f. We define this probability as the degree of selectivity p. For a given neuron n in a given model M and a linguistic trigger f, we try to find a threshold t that maximizes p_{f,t}:

c_{f,t} = \frac{N^{+}_{f,t}}{N_f}, \quad c_{\neg f,t} = \frac{N^{+}_{\neg f,t}}{N_{\neg f}}, \quad \text{Selectivity} = p_{f,t} = \frac{2 \times c_{f,t} \times c_{\neg f,t}}{c_{f,t} + c_{\neg f,t}},

where N^{+}_{f,t} is the number of correctly discriminated words with linguistic feature f based on the threshold t, N_f is the true number of words with linguistic feature f, N^{+}_{\neg f,t} is the number of correctly discriminated words without linguistic feature f based on the threshold t, and N_{\neg f} is the true number of words without linguistic feature f. c_{f,t} / c_{\neg f,t} is the accuracy with which neuron n of model M detects the existence / nonexistence of the linguistic feature f, and p_{f,t} is the F-score of c_{f,t} and c_{\neg f,t}, indicating the degree to which a certain neuron discriminates the words with/without a certain trigger f at a certain threshold t. After calculating the selectivity of the 64 neurons in an embedding model towards a linguistic trigger f, we sort the neurons according to their selectivity and draw the curve in Figure 11 for each model. The x-axis is the rank of the model neurons based on their selectivity towards a certain linguistic trigger; the y-axis is the selectivity of the corresponding neuron. The curve tells us how many neurons selectively respond to trigger f to a certain degree. For example, we can see from Figure 11 that the maximum selectivity of the AE neurons reaches nearly 0.9. This means that one neuron of the AE model is especially sensitive to the prefix ‘Me-’ and the suffix ‘-kan’: it can detect the words with the prefix ‘Me-’ and the suffix ‘-kan’ just from its activation pattern. It is also interesting to see from Figure 11 that neurons of AE respond more selectively to morphological triggers than those of the word-based models. For example, almost 30% of the AE neurons fall in the selectivity range [0.7, 1] towards the verb marker in Indonesian, namely the prefix ‘Me-’ and the suffix ‘-kan’. The context-based models also show some selectivity towards this morphological trigger: for the SG model, the maximum selectivity of the model neurons is only just above 0.7. In contrast, the context-based distributed models show strong selective activation towards country names in Indonesian, whereas the selectivity of all the AE neurons is below 0.7 towards these semantically-related words. Similar patterns are also found in other languages. We conclude that the character-based model captures much more morphological information / syntactic markers than semantic information. The popular word-based model captures both semantic information and syntactic information, although the latter is not displayed as explicitly as the former.
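A small sketch of the selectivity computation defined above: for one neuron, sweep candidate thresholds over its activations and keep the best harmonic mean of the two detection accuracies. The brute-force threshold sweep, the two-sided sign handling, and the function names are our own simplifications, not the authors' code.

```python
import numpy as np

def neuron_selectivity(activations, has_trigger):
    """Degree of selectivity p_{f,t*} for one neuron.

    activations : 1-D array, this neuron's activation for every word
    has_trigger : boolean array, True if the word carries linguistic trigger f
    """
    activations = np.asarray(activations, dtype=float)
    has_trigger = np.asarray(has_trigger, dtype=bool)
    n_f, n_not_f = has_trigger.sum(), (~has_trigger).sum()
    best = 0.0
    for t in np.unique(activations):
        for sign in (1, -1):                    # trigger words may lie above or below t
            pred = sign * activations >= sign * t
            c_f = (pred & has_trigger).sum() / n_f              # accuracy on words with f
            c_not_f = (~pred & ~has_trigger).sum() / n_not_f    # accuracy on words without f
            if c_f + c_not_f > 0:
                best = max(best, 2 * c_f * c_not_f / (c_f + c_not_f))
    return best

# toy usage: a neuron that fires high exactly for words carrying the trigger
acts = np.array([0.9, 0.8, 0.85, 0.1, 0.2, 0.15])
trig = np.array([True, True, True, False, False, False])
print(neuron_selectivity(acts, trig))   # 1.0 for this perfectly selective neuron
```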
6 Related works There have been a lot of research on interpreting or relating word embedding with linguistic features. Yogatama et al. (2014) projects word embedding into a sparse vector. They found some linguistically interpretable dimensions. Faruqui and Dyer (2015) use linguistic features to build word vector. Their results show that these representation of word meaning can also achieve good performance in the analogy and similarity tasks. These work can be regarded as the foreshadowing of our experiment paradigm that mapping dense vector to a sparse linguistic property space. Besides, a lot of study focus on empirical comparison of different word embedding model. Melamud et al. (2016) investigates the influence of context type and vector dimension on word embedding. Their main finding is that concatenating two different types of embeddings can still improve performance even if the utility of dimensionality has run out. Andreas and Klein (2014) assess the potential syntactic information encoded in word embeddings by directly apply word embeddings to parser and they concluded that embeddings add redundant information to what the conventional parser has already extracted. Tsvetkov et al. (2015) propose a method to evaluate word embeddings through the alignment of distributional vectors and linguistic word vectors. However, the method still lacks a direct and comprehensive investigation of the utility of form, context and language typological diversity. This is exactly our novelty and contribution. It is worth noticing that K¨ohn (2015) evaluates multilingual word embedding and compares skip-gram, language model and other competitive embedding models. They show that dependencybased skip-gram embedding is effective, even at low dimension. Although K¨ohn (2015) work involves different languages, they focus on the similarity among multilingual embeddings with only 7 languages. Our work, however, not only provides a comprehensive investigation with massive language samples (30 for Experiment I) and nonlinear mapping models, but also reveal the utility of pure word form and novelly point out the cross-language differences in word representation, which have been overlooked by huge amount of monolingual/bilingual research on well-studied languages. 7 Conclusion In this paper, we quantify the utility of word form and the effect of language typological diversity in learning word representations. Crosslanguage perspective and novel analysis of neuron behaviours provide us with new evidence about the typological universal and specific revealed in word embedding. We summarize from our experiments on a massive set of languages that: • Language typological diversity, especially the specific word order type and morphological complexity, does influence how linguistic information is encoded in word embedding. • It is plausible (and sometimes even better) to decode grammatical function just from the word form, for certain inflectional languages. • Quantification of neuron activation pattern reveals different characteristics of the context-based model and the character-based counterpart. Therefore, we think that it is necessary to maximize both the utility of word form and the advantage of the context for a better word representation. It would also be a promising direction to incorporate the factor of language typological diversity when designing advanced word representation model for languages other than English. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. 
This work was partially funded by National Natural Science Foundation of China (No. 61532011, 61473092, and 61472088), the National High Technology Research and Development Program of China (No. 2015AA015408). References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the 1486 Seventeenth Conference on Computational Natural Language Learning, pages 183–192, Sofia, Bulgaria, August. Association for Computational Linguistics. Jacob Andreas and Dan Klein. 2014. How much do word embeddings encode about syntax? In Proceedings of ACL, pages 822–827. Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of EMNLP. Emily M Bender. 2009. Linguistically na¨ıve!= language independent: why nlp needs linguistic typology. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?, pages 26–32. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. William Croft. 2002. Typology and universals. Cambridge University Press. Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Universal stanford dependencies: A cross-linguistic typology. In LREC, volume 14, pages 4585–4592. Peter Sheridan Dodds, Eric M Clark, Suma Desu, Morgan R Frank, Andrew J Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M Kloumann, James P Bagrow, et al. 2015. Human language reveals a universal positivity bias. Proceedings of the National Academy of Sciences, 112(8):2389–2394. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology. http://wals. info/. Manaal Faruqui and Chris Dyer. 2015. Nondistributional word vector representations. In Proceedings of ACL. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of NAACL. Zellig S. Harris. 1954. Distributional structure. Synthese Language Library, 10:146–162. Maria Jesus Aranzabe Masayuki Asahara Aitziber Atutxa Miguel Ballesteros John Bauer Kepa Bengoetxea Riyaz Ahmad Bhat Cristina Bosco Sam Bowman Giuseppe G. A. 
Celano Miriam Connor Marie-Catherine de Marneffe Arantza Diaz de Ilarraza Kaja Dobrovoljc Timothy Dozat Tomaˇz Erjavec Rich´ard Farkas Jennifer Foster Daniel Galbraith Filip Ginter Iakes Goenaga Koldo Gojenola Yoav Goldberg Berta Gonzales Bruno Guillaume Jan Hajiˇc Dag Haug Radu Ion Elena Irimia Anders Johannsen Hiroshi Kanayama Jenna Kanerva Simon Krek Veronika Laippala Alessandro Lenci Nikola Ljubeˇsi´c Teresa Lynn Christopher Manning Ctlina Mrnduc David Mareˇcek H´ector Mart´ınez Alonso Jan Maˇsek Yuji Matsumoto Ryan McDonald Anna Missil¨a Verginica Mititelu Yusuke Miyao Simonetta Montemagni Shunsuke Mori Hanna Nurmi Petya Osenova Lilja Øvrelid Elena Pascual Marco Passarotti Cenel-Augusto Perez Slav Petrov Jussi Piitulainen Barbara Plank Martin Popel Prokopis Prokopidis Sampo Pyysalo Loganathan Ramasamy Rudolf Rosa Shadi Saleh Sebastian Schuster Wolfgang Seeker Mojgan Seraji Natalia Silveira Maria Simi Radu Simionescu Katalin Simk´o Kiril Simov Aaron Smith Jan ˇStˇep´anek Alane Suhr Zsolt Sz´ant´o Takaaki Tanaka Reut Tsarfaty Sumire Uematsu Larraitz Uria Viktor Varga Veronika Vincze Zdenˇek ˇZabokrtsk´y Daniel Zeman Joakim Nivre, ˇZeljko Agi´c and Hanzhi Zhu. 2015. Universal dependencies 1.2. In LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proceedings of AAAI. Arne K¨ohn. 2015. Whats in an embedding? analyzing word embeddings through multilingual evaluation. In Poceedings of EMNLP. Siwei Lai, Kang Liu, Liheng Xu, and Jun Zhao. 2015. How to generate a good word embedding? arXiv preprint arXiv:1507.05523. Q. V. Le. 2011. Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8595 – 8598. Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, Ram´on Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of EMNLP. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR. 1487 Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of EMNLP. Nianwen Xue, Zixin Jiang, Xiuhong Zhong, Martha Palmer, Fei Xia, Fu-Dong Chiou, and Meiyu Chang. 2010. Chinese treebank 7.0. Linguistic Data Consortium, Philadelphia. Dani Yogatama, Manaal Faruqui, Chris Dyer, and Noah A Smith. 2014. Learning word representations with hierarchical sparse coding. In Proceedings of ICML. 1488
2016
140
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1489–1501, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change William L. Hamilton, Jure Leskovec, Dan Jurafsky Department of Computer Science, Stanford University, Stanford CA, 94305 wleif,jure,[email protected] Abstract Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic change by evaluating word embeddings (PPMI, SVD, word2vec) against known historical changes. We then use this methodology to reveal statistical laws of semantic evolution. Using six historical corpora spanning four languages and two centuries, we propose two quantitative laws of semantic change: (i) the law of conformity—the rate of semantic change scales with an inverse power-law of word frequency; (ii) the law of innovation—independent of frequency, words that are more polysemous have higher rates of semantic change. 1 Introduction Shifts in word meaning exhibit systematic regularities (Br´eal, 1897; Ullmann, 1962). The rate of semantic change, for example, is higher in some words than others (Blank, 1999) — compare the stable semantic history of cat (from ProtoGermanic kattuz, “cat”) to the varied meanings of English cast: “to mould”, “a collection of actors’, “a hardened bandage”, etc. (all from Old Norse kasta, “to throw”, Simpson et al., 1989). Various hypotheses have been offered about such regularities in semantic change, such as an increasing subjectification of meaning, or the grammaticalization of inferences (e.g., Geeraerts, 1997; Blank, 1999; Traugott and Dasher, 2001). But many core questions about semantic change remain unanswered. One is the role of frequency. Frequency plays a key role in other linguistic changes, associated sometimes with faster change—sound changes like lenition occur in more frequent words—and sometimes with slower change—high frequency words are more resistant to morphological regularization (Bybee, 2007; Pagel et al., 2007; Lieberman et al., 2007). What is the role of word frequency in meaning change? Another unanswered question is the relationship between semantic change and polysemy. Words gain senses over time as they semantically drift (Br´eal, 1897; Wilkins, 1993; Hopper and Traugott, 2003), and polysemous words1 occur in more diverse contexts, affecting lexical access speed (Adelman et al., 2006) and rates of L2 learning (Crossley et al., 2010). But we don’t know whether the diverse contextual use of polysemous words makes them more or less likely to undergo change (Geeraerts, 1997; Winter et al., 2014; Xu et al., 2015). Furthermore, polysemy is strongly correlated with frequency—high frequency words have more senses (Zipf, 1945; ˙Ilgen and Karaoglan, 2007)—so understanding how polysemy relates to semantic change requires controling for word frequency. Answering these questions requires new methods that can go beyond the case-studies of a few words (often followed over widely different timeperiods) that are our most common diachronic data (Br´eal, 1897; Ullmann, 1962; Blank, 1999; Hopper and Traugott, 2003; Traugott and Dasher, 2001). 
One promising avenue is the use of distributional semantics, in which words are embedded in vector spaces according to their co-occurrence relationships (Bullinaria and Levy, 2007; Turney and Pantel, 2010), and the embeddings of words are then compared across time-periods.

Figure 1: Two-dimensional visualization of semantic change in English using SGNS vectors.2 a, The word gay shifted from meaning “cheerful” or “frolicsome” to referring to homosexuality. b, In the early 20th century broadcast referred to “casting out seeds”; with the rise of television and radio its meaning shifted to “transmitting signals”. c, Awful underwent a process of pejoration, as it shifted from meaning “full of awe” to meaning “terrible or appalling” (Simpson et al., 1989).

1 We use ‘polysemy’ here to refer to related senses as well as rarer cases of accidental homonymy.
2 Appendix B details the visualization method.

This new direction has been effectively demonstrated in a number of case-studies (Sagi et al., 2011; Wijaya and Yeniterzi, 2011; Gulordava and Baroni, 2011; Jatowt and Duh, 2014) and used to perform large-scale linguistic change-point detection (Kulkarni et al., 2014) as well as to test a few specific hypotheses, such as whether English synonyms tend to change meaning in similar ways (Xu and Kemp, 2015). However, these works employ widely different embedding approaches and test their approaches only on English. In this work, we develop a robust methodology for quantifying semantic change using embeddings by comparing state-of-the-art approaches (PPMI, SVD, word2vec) on novel benchmarks. We then apply this methodology in a large-scale cross-linguistic analysis using 6 corpora spanning 200 years and 4 languages (English, German, French, and Chinese). Based on this analysis, we propose two statistical laws relating frequency and polysemy to semantic change:
• The law of conformity: Rates of semantic change scale with a negative power of word frequency.
• The law of innovation: After controlling for frequency, polysemous words have significantly higher rates of semantic change.
2 Diachronic embedding methods The following sections outline how we construct diachronic (historical) word embeddings, by first constructing embeddings in each time-period and then aligning them over time, and the metrics that we use to quantify semantic change. All of the learned embeddings and the code we used to analyze them are made publicly available.3 2.1 Embedding algorithms We use three methods to construct word embeddings within each time-period: PPMI, SVD, and SGNS (i.e., word2vec).4 These distributional methods represent each word wi by a vector wi that captures information about its co-occurrence statistics. These methods operationalize the ‘distributional hypothesis’ that word semantics are implicit in co-occurrence relationships (Harris, 1954; Firth, 1957). The semantic similarity/distance between two words is approximated by the cosine similarity/distance between their vectors (Turney and Pantel, 2010). 2.1.1 PPMI In the PPMI representations, the vector embedding for word wi ∈ V contains the positive point-wise mutual information (PPMI) values between wi and a large set of pre-specified ‘context’ words. The word vectors correspond to the rows of the matrix M^{PPMI} ∈ R^{|V| × |V_C|} with entries given by

M^{PPMI}_{i,j} = \max\left( \log\left( \frac{\hat{p}(w_i, c_j)}{\hat{p}(w_i)\,\hat{p}(c_j)} \right) - \alpha,\; 0 \right), \qquad (1)

where cj ∈ V_C is a context word and α > 0 is a negative prior, which provides a smoothing bias (Levy et al., 2015).
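To make Eq. (1) concrete, here is a minimal sketch of building a PPMI matrix from raw co-occurrence counts. The exact estimate of p̂ (e.g., context-distribution smoothing) and the choice of α vary across implementations and are not shown here, so treat this as an illustration rather than the authors' exact pipeline; the function name is ours.

```python
import numpy as np

def ppmi_matrix(cooc, alpha=0.0):
    """Positive PMI matrix from a word-by-context co-occurrence count matrix.

    cooc  : (|V|, |V_C|) array of raw co-occurrence counts
    alpha : negative prior subtracted inside the max (shifted PPMI when > 0)
    """
    total = cooc.sum()
    p_wc = cooc / total                       # joint probabilities p(w, c)
    p_w = p_wc.sum(axis=1, keepdims=True)     # marginal p(w)
    p_c = p_wc.sum(axis=0, keepdims=True)     # marginal p(c)
    with np.errstate(divide="ignore"):        # log(0) -> -inf, clipped below
        pmi = np.log(p_wc / (p_w * p_c))
    return np.maximum(pmi - alpha, 0.0)

# toy usage with a 3-word vocabulary and 3 context words
counts = np.array([[10., 0., 2.],
                   [0., 8., 1.],
                   [3., 1., 6.]])
print(ppmi_matrix(counts).round(2))
```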
The ˆp correspond to the smoothed empirical probabilities of word 3http://nlp.stanford.edu/projects/histwords 4Synchronic applications of these three methods are reviewed in detail in Levy et al. (2015). 1490 Name Language Description Tokens Years POS Source ENGALL English Google books (all genres) 8.5 × 1011 1800-1999 (Davies, 2010) ENGFIC English Fiction from Google books 7.5 × 1010 1800-1999 (Davies, 2010) COHA English Genre-balanced sample 4.1 × 108 1810-2009 (Davies, 2010) FREALL French Google books (all genres) 1.9 × 1011 1800-1999 (Sagot et al., 2006) GERALL German Google books (all genres) 4.3 × 1010 1800-1999 (Schneider and Volk, 1998) CHIALL Chinese Google books (all genres) 6.0 × 1010 1950-1999 (Xue et al., 2005) Table 1: Six large historical datasets from various languages and sources are used. (co-)occurrences within fixed-size sliding windows of text. Clipping the PPMI values above zero ensures they remain finite and has been shown to dramatically improve results (Bullinaria and Levy, 2007; Levy et al., 2015); intuitively, this clipping ensures that the representations emphasize positive word-word correlations over negative ones. 2.1.2 SVD SVD embeddings correspond to low-dimensional approximations of the PPMI embeddings learned via singular value decomposition (Levy et al., 2015). The vector embedding for word wi is given by wSVD i = (UΣγ)i , (2) where MPPMI = UΣV⊤is the truncated singular value decomposition of MPPMI and γ ∈[0, 1] is an eigenvalue weighting parameter. Setting γ < 1 has been shown to dramatically improve embedding qualities (Turney and Pantel, 2010; Bullinaria and Levy, 2012). This SVD approach can be viewed as a generalization of Latent Semantic Analysis (Landauer and Dumais, 1997), where the term-document matrix is replaced with MPPMI. Compared to PPMI, SVD representations can be more robust, as the dimensionality reduction acts as a form of regularization. 2.1.3 Skip-gram with negative sampling SGNS ‘neural’ embeddings are optimized to predict co-occurrence relationships using an approximate objective known as ‘skip-gram with negative sampling’ (Mikolov et al., 2013). In SGNS, each word wi is represented by two dense, lowdimensional vectors: a word vector (wSGNS i ) and context vector (cSGNS i ). These embeddings are optimized via stochastic gradient descent so that ˆp(ci|wi) ∝exp(wSGNS i · cSGNS j ), (3) where p(ci|wi) is the empirical probability of seeing context word ci within a fixed-length window of text, given that this window contains wi. The SGNS optimization avoids computing the normalizing constant in (3) by randomly drawing ‘negative’ context words, cn, for each target word and ensuring that exp(wSGNS i ·cSGNS n ) is small for these examples. SGNS has the benefit of allowing incremental initialization during learning, where the embeddings for time t are initialized with the embeddings from time t −∆(Kim et al., 2014). We employ this trick here, though we found that it had a negligible impact on our results. 2.2 Datasets, pre-processing, and hyperparameters We trained models on the 6 datasets described in Table 1, taken from Google N-Grams (Lin et al., 2012) and the COHA corpus (Davies, 2010). The Google N-Gram datasets are extremely large (comprising ≈6% of all books ever published), but they also contain many corpus artifacts due, e.g., to shifting sampling biases over time (Pechenick et al., 2015). 
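To make the embedding constructions above concrete, the following is a minimal NumPy/SciPy sketch of Equations (1) and (2), assuming a dense co-occurrence count matrix has already been accumulated from the fixed-size windows described in Section 2.2. The function names, the rows-as-words layout, and the default value of gamma are illustrative choices on our part; the context-distribution smoothing exponent (0.75) and the negative prior alpha follow Appendix A, and the released histwords code (footnote 3) remains the reference implementation.

```python
import numpy as np
from scipy.sparse.linalg import svds

def ppmi_matrix(counts, alpha=np.log(5), cds=0.75):
    """PPMI with a negative prior (Eq. 1). `counts` is a |V| x |V_C| array of
    co-occurrence counts; context probabilities are smoothed with exponent 0.75
    as in Appendix A (which uses alpha = log(5) for the PPMI vectors and
    alpha = 0 when the matrix feeds the SVD)."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total           # p(w_i)
    ctx = counts.sum(axis=0) ** cds
    p_c = (ctx / ctx.sum())[np.newaxis, :]                    # smoothed p(c_j)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = -np.inf                          # zero co-occurrence cells
    return np.maximum(pmi - alpha, 0.0)                       # clip below zero

def svd_embeddings(m_ppmi, dim=300, gamma=0.5):
    """Truncated-SVD embeddings (Eq. 2): w_i = (U Sigma^gamma)_i. The paper
    only states gamma < 1; 0.5 here is an illustrative default."""
    u, s, _ = svds(m_ppmi, k=dim)
    order = np.argsort(-s)                                    # svds returns singular values ascending
    return u[:, order] * (s[order] ** gamma)
```

On real corpora the count matrix is sparse and the vocabulary is restricted as described in Appendix A; the sketch uses dense arrays purely for readability.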
In contrast, the COHA corpus was carefully selected to be genre-balanced and representative of American English over the last 200 years, though as a result it is two orders of magnitude smaller. The COHA corpus also contains pre-extracted word lemmas, which we used to validate that our results hold at both the lemma and raw token levels. All the datasets were aggregated to the granularity of decades.5 We follow the recommendations of Levy et al. (2015) in setting the hyperparameters for the embedding methods, though preliminary experiments were used to tune key settings. For all methods, we used symmetric context windows of size 4 (on each side). For SGNS and SVD, we use embeddings of size 300. See Appendix A for further implementation and pre-processing details. 5The 2000s decade of the Google data was discarded due to shifts in the sampling methodology (Michel et al., 2011). 1491 2.3 Aligning historical embeddings In order to compare word vectors from different time-periods we must ensure that the vectors are aligned to the same coordinate axes. Explicit PPMI vectors are naturally aligned, as each column simply corresponds to a context word. Low-dimensional embeddings will not be naturally aligned due to the non-unique nature of the SVD and the stochastic nature of SGNS. In particular, both these methods may result in arbitrary orthogonal transformations, which do not affect pairwise cosine-similarities within-years but will preclude comparison of the same word across time. Previous work circumvented this problem by either avoiding low-dimensional embeddings (e.g., Gulordava and Baroni, 2011; Jatowt and Duh, 2014) or by performing heuristic local alignments per word (Kulkarni et al., 2014). We use orthogonal Procrustes to align the learned low-dimensional embeddings. Defining W(t) ∈Rd×|V| as the matrix of word embeddings learned at year t, we align across time-periods while preserving cosine similarities by optimizing: R(t) = arg min Q⊤Q=I ∥W(t)Q −W(t+1)∥F , (4) with R(t) ∈Rd×d. The solution corresponds to the best rotational alignment and can be obtained efficiently using an application of SVD (Sch¨onemann, 1966). 2.4 Time-series from historical embeddings Diachronic word embeddings can be used in two ways to quantify semantic change: (i) we can measure changes in pair-wise word similarities over time, or (ii) we can measure how an individual word’s embedding shifts over time. Pair-wise similarity time-series Measuring how the cosine-similarity between pairs of words changes over time allows us to test hypotheses about specific linguistic or cultural shifts in a controlled manner. We quantify shifts by computing the similarity time-series s(t)(wi, wj) = cos-sim(w(t) i , w(t) j ) (5) between two words wi and wj over a time-period (t, ..., t + ∆). We then measure the Spearman correlation (ρ) of this series against time, which allows us to assess the magnitude and significance of pairwise similarity shifts; since the Spearman correlation is non-parametric, this measure essentially detects whether the similarity series increased/decreased over time in a significant manner, regardless of the ‘shape’ of this curve.6 Measuring semantic displacement After aligning the embeddings for individual timeperiods, we can use the aligned word vectors to compute the semantic displacement that a word has undergone during a certain time-period. In particular, we can directly compute the cosinedistance between a word’s representation for different time-periods, i.e. 
cos-dist(wt, wt+∆), as a measure of semantic change. We can also use this measure to quantify ‘rates’ of semantic change for different words by looking at the displacement between consecutive time-points. 3 Comparison of different approaches We compare the different distributional approaches on a set of benchmarks designed to test their scientific utility. We evaluate both their synchronic accuracy (i.e., ability to capture word similarity within individual time-periods) and their diachronic validity (i.e., ability to quantify semantic changes over time). 3.1 Synchronic Accuracy We evaluated the synchronic (within-time-period) accuracy of the methods using a standard modern benchmark and the 1990s portion of the ENGALL data. On Bruni et al. (2012)’s MEN similarity task of matching human judgments of word similarities, SVD performed best (ρ = 0.739), followed by PPMI (ρ = 0.687) and SGNS (ρ = 0.649). These results echo the findings of Levy et al. (2015), who found SVD to perform best on similarity tasks while SGNS performed best on analogy tasks (which are not the focus of this work). 3.2 Diachronic Validity We evaluate the diachronic validity of the methods on two historical semantic tasks: detecting known shifts and discovering shifts from data. For both these tasks, we performed detailed evaluations on a small set of examples (28 known shifts and the top-10 “discovered” shifts by each method). Using these reasonably-sized evaluation sets allowed the authors to evaluate each case rigorously using existing literature and historical corpora. 6Other metrics or change-point detection approaches, e.g. mean shifts (Kulkarni et al., 2014) could also be used. 1492 Word Moving towards Moving away Shift start Source gay homosexual, lesbian happy, showy ca 1920 (Kulkarni et al., 2014) fatal illness, lethal fate, inevitable <1800 (Jatowt and Duh, 2014) awful disgusting, mess impressive, majestic <1800 (Simpson et al., 1989) nice pleasant, lovely refined, dainty ca 1900 (Wijaya and Yeniterzi, 2011) broadcast transmit, radio scatter, seed ca 1920 (Jeffers and Lehiste, 1979) monitor display, screen — ca 1930 (Simpson et al., 1989) record tape, album — ca 1920 (Kulkarni et al., 2014) guy fellow, man — ca 1850 (Wijaya and Yeniterzi, 2011) call phone, message — ca 1890 (Simpson et al., 1989) Table 2: Set of attested historical shifts used to evaluate the methods. The examples are taken from previous works on semantic change and from the Oxford English Dictionary (OED), e.g. using ‘obsolete’ tags. The shift start points were estimated using attestation dates in the OED. The first six examples are words that shifted dramatically in meaning while the remaining four are words that acquired new meanings (while potentially also keeping their old ones). Method Corpus % Correct %Sig. PPMI ENGALL 96.9 84.4 COHA 100.0 88.0 SVD ENGALL 100.0 90.6 COHA 100.0 96.0 SGNS ENGALL 100.0 93.8 COHA 100.0 72.0 Table 3: Performance on detection task, i.e. ability to capture the attested shifts from Table 2. SGNS and SVD capture the correct directionality of the shifts in all cases (%Correct), e.g., gay becomes more similar to homosexual, but there are differences in whether the methods deem the shifts to be statistically significant at the p < 0.05 level (%Sig). Detecting known shifts. First, we tested whether the methods capture known historical shifts in meaning. 
The goal in this task is for the methods to correctly capture whether pairs of words moved closer or further apart in semantic space during a pre-determined time-period. We use a set of independently attested shifts as an evaluation set (Table 2). For comparison, we evaluated the methods on both the large (but messy) ENGALL data and the smaller (but clean) COHA data. On this task, all the methods performed almost perfectly in terms of capturing the correct directionality of the shifts (i.e., the pairwise similarity series have the correct sign on their Spearman correlation with time), but there were some differences in whether the methods deemed the shifts statistically significant at the p < 0.05 level.7 Overall, SGNS performed the best on the full English data, but its performance dropped significantly on the smaller COHA dataset, where SVD performed best. PPMI was noticeably worse than the other two approaches (Table 3). Discovering shifts from data. We tested whether the methods discover reasonable shifts 7All subsequent significance tests are at p < 0.05. by examining the top-10 words that changed the most from the 1900s to the 1990s according to the semantic displacement metric introduced in Section 2.4 (limiting our analysis to words with relative frequencies above 10−5 in both decades). We used the ENGFIC data as the most-changed list for ENGALL was dominated by scientific terms due to changes in the corpus sample. Table 4 shows the top-10 words discovered by each method. These shifts were judged by the authors as being either clearly genuine, borderline, or clearly corpus artifacts. SGNS performed by far the best on this task, with 70% of its top-10 list corresponding to genuine semantic shifts, followed by 40% for SVD, and 10% for PPMI. However, a large portion of the discovered words for PPMI (and less so SVD) correspond to borderline cases, e.g. know, that have not necessarily shifted significantly in meaning but that occur in different contexts due to global genre/discourse shifts. The poor quality of the nearest neighbors generated by the PPMI algorithm—which are skewed by PPMI’s sensitivity to rare events—also made it difficult to assess the quality of its discovered shifts. SVD was the most sensitive to corpus artifacts (e.g., co-occurrences due to cover pages and advertisements), but it still captured a number of genuine semantic shifts. We opted for this small evaluation set and relied on detailed expert judgments to minimize ambiguity; each potential shift was analyzed in detail by consulting consulting existing literature (especially the OED; Simpson et al., 1989) and all disagreements were discussed. Table 5 details representative example shifts in English, French, and German. Chinese lacks sufficient historical data for this task, as only years 1950-1999 are usable; however, we do still see 1493 Method Top-10 words that changed from 1900s to 1990s PPMI know, got, would, decided, think, stop, remember, started, must, wanted SVD harry, headed, calls, gay, wherever, male, actually, special, cover, naturally SGNS wanting, gay, check, starting, major, actually, touching, harry, headed, romance Table 4: Top-10 English words with the highest semantic displacement values between the 1900s and 1990s. 
Bolded entries correspond to real semantic shifts, as deemed by examining the literature and their nearest neighbors; for example, headed shifted from primarily referring to the “top of a body/entity” to referring to “a direction of travel.” Underlined entries are borderline cases that are largely due to global genre/discourse shifts; for example, male has not changed in meaning, but its usage in discussions of “gender equality” is relatively new. Finally, unmarked entries are clear corpus artifacts; for example, special, cover, and romance are artifacts from the covers of fiction books occasionally including advertisements etc. Word Language Nearest-neighbors in 1900s Nearest-neighbors in 1990s wanting English lacking, deficient, lacked, lack, needed wanted, something, wishing, anything, anybody asile French refuge, asiles, hospice, vieillards, infirmerie demandeurs, refuge, hospice, visas, admission widerstand German scheiterte, volt, stromst¨arke, leisten, brechen opposition, verfolgung, nationalsozialistische, nationalsozialismus, kollaboration Table 5: Example words that changed dramatically in meaning in three languages, discovered using SGNS embeddings. The examples were selected from the top-10 most-changed lists between 1900s and 1990s as in Table 4. In English, wanting underwent subjectification and shifted from meaning “lacking” to referring to subjective ”desire”, as in “the education system is wanting” (1900s) vs. ”I’ve been wanting to tell you” (1990s). In French asile (“asylum”) shifted from primarily referring to “hospitals, or infirmaries” to also referring to “asylum seekers, or refugees”. Finally, in German Widerstand (“resistance”) gained a formal meaning as referring to the local German resistance to Nazism during World War II. some significant changes for Chinese in this short time-period, such as 病毒(“virus”) moving closer to 电脑(“computer”, ρ = 0.89). 3.3 Methodological recommendations PPMI is clearly worse than the other two methods; it performs poorly on all the benchmark tasks, is extremely sensitive to rare events, and is prone to false discoveries from global genre shifts. Between SVD and SGNS the results are somewhat equivocal, as both perform best on two out of the four tasks (synchronic accuracy, ENGALL detection, COHA detection, discovery). Overall, SVD performs best on the synchronic accuracy task and has higher average accuracy on the ‘detection’ task, while SGNS performs best on the ‘discovery’ task. These results suggest that both these methods are reasonable choices for studies of semantic change but that they each have their own tradeoffs: SVD is more sensitive, as it performs well on detection tasks even when using a small dataset, but this sensitivity also results in false discoveries due to corpus artifacts. In contrast, SGNS is robust to corpus artifacts in the discovery task, but it is not sensitive enough to perform well on the detection task with a small dataset. Qualitatively, we found SGNS to be most useful for discovering new shifts and visualizing changes (e.g., Figure 1), while SVD was most effective for detecting subtle shifts in usage. 4 Statistical laws of semantic change We now show how diachronic embeddings can be used in a large-scale cross-linguistic analysis to reveal statistical laws that relate frequency and polysemy to semantic change. 
In particular, we analyze how a word’s rate of semantic change, ∆(t)(wi) = cos-dist(w(t) i , w(t+1) i ) (6) depends on its frequency, f(t)(wi) and a measure of its polysemy, d(t)(wi) (defined in Section 4.4). 4.1 Setup We present results using SVD embeddings (though analogous results were found to hold with SGNS). Using all four languages and all four conditions for English (ENGALL, ENGFIC, and COHA with and without lemmatization), we performed regression analysis on rates of semantic change, ∆(t)(wi); thus, we examined one data-point per word for each pair of consecutive decades and analyzed how a word’s frequency and polysemy at time t correlate with its degree of semantic displacement over the next decade. To ensure the robustness of our results, we analyzed only the top-10000 non–stop words by aver1494 Top-10 most polysemous yet, always, even, little, called, also, sometimes, great, still, quite Top-10 least polysemous photocopying, retrieval, thirties, mom, sweater, forties, seventeenth, fifteenth, holster, postage Table 6: The top-10 most and least polysemous words in the ENGFIC data. Words like yet, even, and still are used in many diverse ways and are highly polysemous. In contrast, words like photocopying, postage, and holster tend to be used in very specific well-clustered contexts, corresponding to a single sense; for example, mail and letter are both very likely to occur in the context of postage and are also likely to co-occur with each other, independent of postage. a b Figure 2: Higher frequency words have lower rates of change (a), while polysemous words have higher rates of change (b). The negative curvature for polysemy—which is significant only at high d(wi)—varies across datasets and was not present with SGNS, so it is not as robust as the clear linear trend that was seen with all methods and across all datasets. The trendlines show 95% CIs from bootstrapped kernel regressions on the ENGALL data (Li and Racine, 2007). age historical frequency (lower-frequency words tend to lack sufficient co-occurrence data across years) and we discarded proper nouns (changes in proper noun usage are primarily driven by nonlinguistic factors, e.g. historical events, Traugott and Dasher, 2001). We also log-transformed the semantic displacement scores and normalized the scores to have zero mean and unit variance; we denote these normalized scores by ˜∆(t)(wi). We performed our analysis using a linear mixed model with random intercepts per word and fixed effects per decade; i.e., we fit βf, βd, and βt s.t. ˜∆(t)(wi) = βf log  f(t)(wi)  +βd log  d(t)(wi)  + βt + zwi + ϵ(t) wi ∀wi ∈V, t ∈{t0, ..., tn}, (7) where zwi ∼N(0, σwi) is the random intercept for word wi and ϵ(t) wi ∈N(0, σ) is an error term. βf, βd and βt correspond to the fixed effects for frequency, polysemy and the decade t, respectively8. Intuitively, this model estimates the effects of frequency and polysemy on semantic change, while controlling for temporal trends and correcting for the fact that measurements on same word will be correlated across time. We fit (7) using the standard restricted maximum likelihood algorithm (McCulloch and Neuhaus, 2001; Appendix C). 8Note that time is treated as a categorical variable, as each decade has its own fixed effect. 4.2 Overview of results We find that, across languages, rates of semantic change obey a scaling relation of the form ∆(wi) ∝f(wi)βf × d(wi)βd, (8) with βf < 0 and βd > 0. 
This finding implies that frequent words change at slower rates while polysemous words change faster, and that both these relations scale as power laws. 4.3 Law of conformity: Frequently used words change at slower rates Using the model in equation (7), we found that the logarithm of a word’s frequency, log(f(wi)), has a significant and substantial negative effect on rates of semantic change in all settings (Figures 2a and 3a). Given the use of log-transforms in preprocessing the data this implies rates of semantic change are proportional to a negative power (βf) of frequency, i.e. ∆(wi) ∝f(wi)βf , (9) with βf ∈ [−1.26, −0.27] across languages/datasets. The relatively large range of values for βf is due to the fact that the COHA datasets are outliers due to their substantially smaller sample sizes (Figure 3; the range is βf ∈[−0.66, −0.27] with COHA excluded). 1495 Figure 3: a, The estimated linear effect of log-frequency (ˆβf) is significantly negative across all languages. The effect is significantly stronger in the COHA data, but this is likely due to its small sample size (∼100× smaller than the other datasets); the small sample size introduces random variance that may artificially inflate the effect of frequency. From the COHA data, we also see that the result holds regardless of whether lemmatization is used. b, Analogous trends hold for the linear effect of the polysemy score (ˆβd), which is strong and significantly positive across all conditions. Again, we see that the smaller COHA datasets are mild outliers.9 95% CIs are shown. 4.4 Law of innovation: Polysemous words change at faster rates There is a common hypothesis in the linguistic literature that “words become semantically extended by being used in diverse contexts” (Winter et al., 2014), an idea that dates back to the writings of Br´eal (1897). We tested this notion by examining the relationship between polysemy and semantic change in our data. Quantifying polysemy Measuring word polysemy is a difficult and fraught task, as even “ground truth” dictionaries differ in the number of senses they assign to words (Simpson et al., 1989; Fellbaum, 1998). We circumvent this issue by measuring a word’s contextual diversity as a proxy for its polysemousness. The intuition behind our measure is that words that occur in many distinct, unrelated contexts will tend to be highly polysemous. This view of polysemy also fits with previous work on semantic change, which emphasizes the role of contextual diversity (Br´eal, 1897; Winter et al., 2014). We measure a word’s contextual diversity, and thus polysemy, by examining its neighborhood in an empirical co-occurrence network. We construct empirical co-occurrence networks using the PPMI measure defined in Section 2. In these networks words are connected to each other if they co-occur more than one would expect by chance (after smoothing). The polysemy of a word is then measured as its local clustering coefficient within 9The COHA data is ∼100× smaller, which has a global effect on the construction of the co-occurrence network (e.g., lower average degree) used to compute polysemy scores. this network (Watts and Strogatz, 1998): d(wi) = − P ci,cj∈NPPMI(wi) I {PPMI(ci, cj) > 0} |NPPMI(wi)|(|NPPMI(wi)| −1) , (10) where NPPMI(wi) = {wj : PPMI(wi, wj) > 0}. This measure counts the proportion of wi’s neighbors that are also neighbors of each other. 
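Equation (10) is simply a negated local clustering coefficient, which is short enough to spell out. The sketch below assumes a square word-by-word PPMI matrix (as in Section 2.1.1, with the context vocabulary equal to the target vocabulary); the function name and the handling of words with fewer than two neighbors are our own choices.

```python
import numpy as np

def polysemy_score(m_ppmi, i):
    """Contextual-diversity score of Eq. (10): minus the local clustering
    coefficient of word i in the PPMI network. Values near 0 indicate diverse,
    polysemous usage; values near -1 indicate one tight cluster of contexts."""
    neighbors = np.flatnonzero(m_ppmi[i] > 0)       # N_PPMI(w_i)
    k = len(neighbors)
    if k < 2:
        return 0.0                                  # undefined; treated here as maximally diverse
    links = m_ppmi[np.ix_(neighbors, neighbors)] > 0
    np.fill_diagonal(links, False)                  # drop self-links
    return -links.sum() / (k * (k - 1))             # ordered neighbor pairs that also co-occur
```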
According to this measure, a word will have a high clustering coefficient (and thus a low polysemy score) if the words that it co-occurs with also tend to cooccur with each other. Polysemous words that are contextually diverse will have low clustering coefficients, since they appear in disjointed or unrelated contexts. Variants of this measure are often used in wordsense discrimination and correlate with, e.g., number of senses in WordNet (Dorow and Widdows, 2003; Ferret, 2004). However, we found that it was slightly biased towards rating contextually diverse discourse function words (e.g., also) as highly polysemous, which needs to be taken into account when interpreting our results. We opted to use this measure, despite this bias, because it has the strong benefit of being clearly interpretable: it simply measures the extent to which a word appears in diverse textual contexts. Table 6 gives examples of the least and most polysemous words in the ENGFIC data, according to this score. As expected, this measure has significant intrinsic positive correlation with frequency. Across datasets, we found Pearson correlations in the range 0.45 < r < 0.8 (all p < 0.05), confirming frequent words tend to be used in a greater diversity of contexts. As a consequence of this high correlation, we interpret the effect of this measure only after controlling for frequency (this control is naturally captured in equation (7)). 1496 Polysemy and semantic change After fitting the model in equation (7), we found that the logarithm of the polysemy score exhibits a strong positive effect on rates of semantic change, throughout all four languages (Figure 3b). As with frequency, the relation takes the form of a power law ∆(wi) ∝d(wi)βd, (11) with a language/corpus dependent scaling constant in βd ∈[0.37, 0.77]. Note that this relationship is a complete reversal from what one would expect according to d(wi)’s positive correlation with frequency; i.e., since frequency and polysemy are highly positively correlated, one would expect them to have similar effects on semantic change, but we found that the effect of polysemy completely reversed after controlling for frequency. Figure 2b shows the relationship of polysemy with rates of semantic change in the ENGALL data after regressing out effect of frequency (using the method of Graham, 2003). 5 Discussion We show how distributional methods can reveal statistical laws of semantic change and offer a robust methodology for future work in this area. Our work builds upon a wealth of previous research on quantitative approaches to semantic change, including prior work with distributional methods (Sagi et al., 2011; Wijaya and Yeniterzi, 2011; Gulordava and Baroni, 2011; Jatowt and Duh, 2014; Kulkarni et al., 2014; Xu and Kemp, 2015), as well as recent work on detecting the emergence of novel word senses (Lau et al., 2012; Mitra et al., 2014; Cook et al., 2014; Mitra et al., 2015; Frermann and Lapata, 2016). We extend these lines of work by rigorously comparing different approaches to quantifying semantic change and by using these methods to propose new statistical laws of semantic change. The two statistical laws we propose have strong implications for future work in historical semantics. The law of conformity—frequent words change more slowly—clarifies frequency’s role in semantic change. 
Future studies of semantic change must account for frequency’s conforming effect: when examining the interaction between some linguistic process and semantic change, the law of conformity should serve as a null model in which the interaction is driven primarily by underlying frequency effects. The law of innovation—polysemous words change more quickly—quantifies the central role polysemy plays in semantic change, an issue that has concerned linguists for more than 100 years (Br´eal, 1897). Previous works argued that semantic change leads to polysemy (Wilkins, 1993; Hopper and Traugott, 2003). However, our results show that polysemous words change faster, which suggests that polysemy may actually lead to semantic change. Overall, these two factors—frequency and polysemy—explain between 48% and 88% of the variance10 in rates of semantic change (across conditions). This remarkable degree of explanatory power indicates that frequency and polysemy are perhaps the two most crucial linguistic factors that explain rates of semantic change over time. These empirical statistical laws also lend themselves to various causal mechanisms. The law of conformity might be a consequence of learning: perhaps people are more likely to use rare words mistakenly in novel ways, a mechanism formalizable by Bayesian models of word learning and corresponding to the biological notion of genetic drift (Reali and Griffiths, 2010). Or perhaps a sociocultural conformity bias makes people less likely to accept novel innovations of common words, a mechanism analogous to the biological process of purifying selection (Boyd and Richerson, 1988; Pagel et al., 2007). Moreover, such mechanisms may also be partially responsible for the law of innovation. Highly polysemous words tend to have more rare senses (Kilgarriff, 2004), and rare senses may be unstable by the law of conformity. While our results cannot confirm such causal links, they nonetheless highlight a new role for frequency and polysemy in language change and the importance of distributional models in historical research. Acknowledgments The authors thank D. Friedman, R. Sosic, C. Manning, V. Prabhakaran, and S. Todd for their helpful comments and discussions. We are also indebted to our anonymous reviewers. W.H. was supported by an NSERC PGS-D grant and the SAP Stanford Graduate Fellowship. W.H., D.J., and J.L. were supported by the Stanford Data Science Initiative, and NSF Awards IIS-1514268, IIS-1149837, and IIS-1159679. 10Marginal R2 (Nakagawa and Schielzeth, 2013). 1497 References James S. Adelman, Gordon D. A. Brown, and Jos´e F. Quesada. 2006. Contextual diversity, not word frequency, determines word-naming and lexical decision times. Psychol. Sci., 17(9):814–823. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python. O’Reilly Media, Inc. Andreas Blank. 1999. Why do new meanings occur? A cognitive typology of the motivations for lexical semantic change. In Peter Koch and Andreas Blank, editors, Historical Semantics and Cognition. Walter de Gruyter, Berlin, Germany. Robert Boyd and Peter J Richerson. 1988. Culture and the Evolutionary Process. University of Chicago Press, Chicago, IL. Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In Proc. ACL, pages 136–145. Michel Br´eal. 1897. Essai de S´emantique: Science des significations. Hachette, Paris, France. John A. Bullinaria and Joseph P. Levy. 2007. 
Extracting semantic representations from word cooccurrence statistics: A computational study. Behav. Res. Methods, 39(3):510–526. John A. Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word cooccurrence statistics: stop-lists, stemming, and SVD. Behav. Res. Methods, 44(3):890–907. J.L. Bybee. 2007. Frequency of Use And the Organization of Language. Oxford University Press, New York City, NY. Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel Word-sense Identification. In Proc. COLING, pages 1624–1635. Scott Crossley, Tom Salsbury, and Danielle McNamara. 2010. The development of polysemy and frequency use in english second language speakers. Language Learning, 60(3):573–605. Mark Davies. 2010. The Corpus of Historical American English: 400 million words, 1810-2009. http://corpus.byu.edu/coha/. Beate Dorow and Dominic Widdows. 2003. Discovering corpus-specific word senses. In Proc. EACL, pages 79–82. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Olivier Ferret. 2004. Discovering word senses from a network of lexical cooccurrences. In Proc. COLING, page 1326. J.R. Firth. 1957. A Synopsis of Linguistic Theory, 1930-1955. In Studies in Linguistic Analysis. Special volume of the Philological Society. Basil Blackwell, Oxford, UK. Lea Frermann and Mirella Lapata. 2016. A Bayesian Model of Diachronic Meaning Change. Trans. ACL, 4:31–45. Dirk Geeraerts. 1997. Diachronic Prototype Semantics: A Contribution to Historical Lexicology. Clarendon Press, Oxford, UK. Michael H. Graham. 2003. Confronting multicollinearity in ecological multiple regression. Ecology, 84(11):2809–2815. Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proc. GEMS 2011 Workshop on Geometrical Models of Natural Language Semantics, pages 67–71. Association for Computational Linguistics. Zellig S. Harris. 1954. Distributional structure. Word, 10:146–162. Paul J. Hopper and Elizabeth Closs Traugott. 2003. Grammaticalization. Cambridge University Press, Cambridge, UK. Adam Jatowt and Kevin Duh. 2014. A framework for analyzing semantic change of words across time. In Proc. ACM/IEEE-CS Conf. on Digital Libraries, pages 229–238. IEEE Press. R. Jeffers and Ilse Lehiste. 1979. Principles and Methods for Historical Linguistics. MIT Press, Cambridge, MA. Adam Kilgarriff. 2004. How dominant is the commonest sense of a word? In Text, Speech and Dialogue, pages 103–111. Springer. Yoon Kim, Yi-I. Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. arXiv preprint arXiv:1405.3515. Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2014. Statistically significant detection of linguistic change. In Proc. WWW, pages 625–635. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychol. Rev., 104(2):211. Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proc. EACL, pages 591–601. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Trans. ACL, 3. 1498 Qi Li and Jeffrey Scott Racine. 2007. Nonparametric econometrics: theory and practice. Princeton University Press, Princeton, NJ. 
Erez Lieberman, Jean-Baptiste Michel, Joe Jackson, Tina Tang, and Martin A. Nowak. 2007. Quantifying the evolutionary dynamics of language. Nature, 449(7163):713–716. Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the google books ngram corpus. In Proc. ACL, System Demonstrations, pages 169–174. Charles E McCulloch and John M Neuhaus. 2001. Generalized linear mixed models. WileyInterscience, Hoboken, NJ. Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, and others. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176–182. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Biemann, Animesh Mukherjee, and Pawan Goyal. 2014. That’s sick dude!: Automatic identification of word sense change across different timescales. In Proc. ACL. Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(05):773–798. Shinichi Nakagawa and Holger Schielzeth. 2013. A general and simple method for obtaining R 2 from generalized linear mixed-effects models. Methods Ecol. Evol., 4(2):133–142. Mark Pagel, Quentin D. Atkinson, and Andrew Meade. 2007. Frequency of word-use predicts rates of lexical evolution throughout Indo-European history. Nature, 449(7163):717–720. Eitan Adam Pechenick, Christopher M. Danforth, and Peter Sheridan Dodds. 2015. Characterizing the Google Books corpus: Strong limits to inferences of socio-cultural and linguistic evolution. PLoS ONE, 10(10). F. Reali and T. L. Griffiths. 2010. Words as alleles: connecting language evolution with Bayesian learners to models of genetic drift. Proc. R. Soc. B, 277(1680):429–436. Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2011. Tracing semantic change with latent semantic analysis. In Kathryn Allan and Justyna A. Robinson, editors, Current Methods in Historical Semantics, page 161. De Gruyter Mouton, Berlin, Germany. Benˆoit Sagot, Lionel Cl´ement, Eric de La Clergerie, and Pierre Boullier. 2006. The Lefff 2 syntactic lexicon for French: architecture, acquisition, use. In Proc. LREC, pages 1–4. Gerold Schneider and Martin Volk. 1998. Adding manual constraints and lexical look-up to a Brilltagger for German. In Proceedings of the ESSLLI98 Workshop on Recent Advances in Corpus Annotation, Saarbr¨ucken. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10. J.S. Seabold and J. Perktold. 2010. Statsmodels: Econometric and statistical modeling with python. In Proc. 9th Python in Science Conference. John Andrew Simpson, Edmund SC Weiner, et al. 1989. The Oxford English Dictionary, volume 2. Clarendon Press Oxford, Oxford, UK. Elizabeth Closs Traugott and Richard B Dasher. 2001. Regularity in Semantic Change. Cambridge University Press, Cambridge, UK. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. J. Artif. Intell. Res., 37(1):141–188. S. Ullmann. 1962. Semantics: An Introduction to the Science of Meaning. 
Barnes & Noble, New York City, NY. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85. Duncan J Watts and Steven H Strogatz. 1998. Collective dynamics of ‘small-world’networks. Nature, 393(6684):440–442. Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proc. Workshop on Detecting and Exploiting Cultural Diversity on the Social Web, pages 35– 40. ACM. David P Wilkins. 1993. From part to person: Natural tendencies of semantic change and the search for cognates. Cognitive Anthropology Research Group at the Max Planck Institute for Psycholinguistics. B. Winter, Graham Thompson, and Matthias Urban. 2014. Cognitive Factors Motivating The Evolution Of Word Meanings: Evidence From Corpora, Behavioral Data And Encyclopedic Network Structure. In Proc. EVOLANG, pages 353–360. 1499 Yang Xu and Charles Kemp. 2015. A computational evaluation of two laws of semantic change. In Proc. Annual Conf. of the Cognitive Science Society. Yang Xu, Terry Regier, and Barbara C. Malt. 2015. Historical Semantic Chaining and Efficient Communication: The Case of Container Names. Cognitive Science. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural language engineering, 11(02):207–238. George Kingsley Zipf. 1945. The meaning-frequency relationship of words. J. Gen. Psychol., 33(2):251– 256. Bahar ˙Ilgen and Bahar Karaoglan. 2007. Investigation of Zipf’s ‘law-of-meaning’on Turkish corpora. In International Symposium on Computer and Information Sciences, pages 1–6. IEEE. A Hyperparameter and pre-processing details For all datasets, words were lowercased and stripped of punctuation. For the Google datasets we built models using the top-100000 words by their average frequency over the entire historical time-periods, and we used the top-50000 for COHA. During model learning we also discarded all words within a year that occurred below a certain threshold (500 for the Google data, 100 for the COHA data). For all methods, we used the hyperparameters recommended in Levy et al. (2015). For the context word distributions in all methods, we used context distribution smoothing with a smoothing parameter of 0.75. Note that for SGNS this corresponds to smoothing the unigram negative sampling distribution. For both, SGNS and PPMI, we set the negative sample prior α = log(5), while we set this value to α = 0 for SVD, as this improved results. When using SGNS on the Google data, we also subsampled, with words being random removed with probability pr(wi) = 1 − q 10−5 f(wi), as recommended by Levy et al. (2015) and Mikolov et al. (2013). Furthermore, to improve the computational efficiency of SGNS (which works with text streams and not co-occurrence counts), we downsampled the larger years in the Google NGram data to have at most 109 tokens. No such subsampling was performed on the COHA data. For all methods, we defined the context set to simply be the same vocabulary as the target words, as is standard in most word vector applications (Levy et al., 2015). However, we found that the PPMI method benefited substantially from larger contexts (similar results were found in Bullinaria and Levy, 2007), so we did not remove any lowfrequency words per year from the context for that method. 
The other embedding approaches did not appear to benefit from the inclusion of these lowfrequency terms, so they were dropped for computational efficiency. For SGNS, we used the implementation provided in Levy et al. (2015). The implementations for PPMI and SVD are released with the code package associated with this work. B Visualization algorithm To visualize semantic change for a word wi in two dimensions we employed the following procedure, which relies on the t-SNE embedding method (Van der Maaten and Hinton, 2008) as a subroutine: 1. Find the union of the word wi’s k nearest neighbors over all necessary time-points. 2. Compute the t-SNE embedding of these words on the most recent (i.e., the modern) time-point. 3. For each of the previous time-points, hold all embeddings fixed, except for the target word’s (i.e., the embedding for wi), and optimize a new t-SNE embedding only for the target word. We found that initializing the embedding for the target word to be the centroid of its k′-nearest neighbors in a timepoint was highly effective. Thus, in this procedure the background words are always shown in their “modern” positions, which makes sense given that these are the current meanings of these words. This approximation is necessary, since in reality all words are moving. C Regression analysis details In addition to the pre-processing mentioned in the main text, we also normalized the contextual diversity scores d(wi) within years by subtracting the yearly median. This was necessary because there was substantial changes in the median contextual diversity scores over years due to changes in corpus sample sizes etc. Data points corresponding to words that occurred less than 500 times during a time-period were also discarded, as 1500 these points lack sufficient data to robustly estimate change rates (this threshold only came into effect on the COHA data, however). We removed stop words and proper nouns by (i) removing all stop-words from the available lists in Python’s NLTK package (Bird et al., 2009) and (ii) restricting our analysis to words with part-of-speech (POS) tags corresponding to four main linguistic categories (common nouns, verbs, adverbs, and adjectives), using the POS sources in Table 1. When analyzing the effects of frequency and contextual diversity, the model contained fixed effects for these features and for time along with random effects for word identity. We opted not to control for POS tags in the presented results, as contextual diversity is co-linear with these tags (e.g., adverbs are more contextual diverse than nouns), and the goal was to demonstrate the main effect of contextual diversity across all word types. That said, the effect of contextual diversity remained strong and significantly positive in all datasets even after controlling for POS tags. To fit the linear mixed models, we used the Python statsmodels package with restricted maximum likelihood estimation (REML) (Seabold and Perktold, 2010). All mentioned significance scores were computed according to Wald’s z-tests, though these results agreed with Bonferroni corrected likelihood ratio tests on the eng-all data. The visualizations in Figure 2 were computed on the eng-all data and correspond to bootstrapped locally-linear kernel regressions with bandwidths selected via the AIC Hurvitch criteria (Li and Racine, 2007). 1501
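Appendix C describes the regression pipeline in prose only, so a compact sketch may help readers reproduce it: consecutive decades are aligned with orthogonal Procrustes (Equation 4), per-word cosine distances give the displacement scores of Equation (6), and the mixed model of Equation (7) is fit with statsmodels' REML estimator, which the appendix names. The rows-as-words matrix layout, function names, data-frame columns, and formula string are our own illustrative choices rather than the released histwords implementation.

```python
import numpy as np
import statsmodels.formula.api as smf

def procrustes_align(w_t, w_next):
    """Eq. (4): rotate decade t onto decade t+1. Rows of w_t and w_next are
    vectors for the same (shared) vocabulary; the optimal orthogonal map is
    U V^T from the SVD of w_t^T w_next (Schonemann, 1966)."""
    u, _, vt = np.linalg.svd(w_t.T @ w_next)
    return w_t @ (u @ vt)

def displacement(w_t, w_next):
    """Eq. (6): per-word cosine distance between aligned consecutive decades."""
    a = procrustes_align(w_t, w_next)
    sims = (a * w_next).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(w_next, axis=1))
    return 1.0 - sims

def fit_mixed_model(df):
    """Eq. (7): fixed effects for log-frequency, log-polysemy and decade, a
    random intercept per word, estimated by REML. `df` is assumed to be a
    pandas DataFrame with one row per (word, decade) and columns delta
    (normalized log displacement), log_freq, log_poly, decade, word."""
    model = smf.mixedlm("delta ~ log_freq + log_poly + C(decade)",
                        data=df, groups=df["word"])
    return model.fit(reml=True)
```

Only words present in both decades can be aligned, so in practice the embedding matrices are restricted to the shared vocabulary before calling the alignment step.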
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1502–1512, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Beyond Plain Spatial Knowledge: Determining Where Entities Are and Are Not Located, and For How Long Alakananda Vempala and Eduardo Blanco Human Intelligence and Language Technologies Lab University of North Texas Denton, TX, 76203 [email protected], [email protected] Abstract This paper complements semantic role representations with spatial knowledge beyond indicating plain locations. Namely, we extract where entities are (and are not) located, and for how long (seconds, hours, days, etc.). Crowdsourced annotations show that this additional knowledge is intuitive to humans and can be annotated by non-experts. Experimental results show that the task can be automated. 1 Introduction Extracting meaning from text is crucial for true text understanding and an important component of several natural language processing systems. Among many others, previous efforts have focused on extracting causal relations (Bethard and Martin, 2008), semantic relations between nominals (Hendrickx et al., 2010), spatial relations (Kordjamshidi et al., 2011) and temporal relations (Pustejovsky et al., 2003; Chambers et al., 2014). In terms of corpora development and automated approaches, semantic roles are one of the most studied semantic representations (Toutanova et al., 2005; M`arquez et al., 2008). They have been proven useful for, among others, coreference resolution (Ponzetto and Strube, 2006) and question answering (Shen and Lapata, 2007). While semantic roles provide a useful semantic layer, they capture a portion of the meaning encoded in all but the simplest statements. Consider the sentence in Figure 1 and the semantic roles of drove (solid arrows). In addition to these roles, humans intuitively understand that (dashed arrow) (1) John was not located in Berlin before or during drove, (2) he was located in Berlin after drove for a short period of time (presumably, until he was done picking up the package, i.e., for a few minutes to John drove AGENT DESTINATION PURPOSE to Berlin to pick up a package Figure 1: Semantic roles (solid arrows) and additional spatial knowledge (dashed arrow). an hour), and then left Berlin and thus (3) was not located there anymore. Some of this additional spatial knowledge is inherent to the motion verb drive: people cannot drive to the location where they are currently located, and they will be located at the destination of driving after driving takes place. But determining for how long the agent of drive remains at the destination depends on the arguments of drive: from [John]AGENT [drove]v [home]DESTINATION [after an exhausting work day]TIME, it is reasonable to believe that John will be located at home overnight. This paper manipulates semantic roles in order to extract temporally-anchored spatial knowledge. We extract where entities are and are not located, and temporally anchor this information. Temporal anchors indicate for how long something is (or is not) located somewhere, e.g., for 5 minutes before (or after) an event. We target additional spatial knowledge not only between arguments of motion verbs as exemplified above, but also between intra-sentential arguments of any verb. 
The main contributions are: (1) crowdsourced annotations on top of OntoNotes1 indicating where something is and is not located (polarity), and for how long (temporal anchors); (2) detailed annotation analysis using coarse- and fine-grained labels (yes / no vs. seconds, minutes, years, etc.); and (3) experiments detailing results with several feature combinations, and using gold-standard and predicted linguistic information. 1Available at http://www.cse.unt.edu/˜blanco/ 1502 2 Definitions and Background We use R(x, y) to denote a semantic relationship R between x and y. R(x, y) can be read “x has R y”, e.g., AGENT(drove, John) can be read “drove has AGENT John.” By definition, semantic roles are semantic relationships between predicates and their arguments—for all semantic roles R(x, y), x is a predicate and y is an argument of x. Generally speaking, semantic roles capture who did what to whom, how, when and where. We use the term additional spatial knowledge to refer to spatial knowledge not captured with semantic roles, i.e., spatial meaning between x and y where (1) x is not a predicate or (2) x is a predicate and y is not an argument of x. As we shall see, we go beyond extracting “x has LOCATION y” with plain LOCATION(x, y) relations. We extract where entities are and are not located, and for how long they are located (and not located) somewhere. 2.1 Semantic Roles in OntoNotes OntoNotes (Hovy et al., 2006) is large corpus (≈64K sentences) that includes verbal semantic role annotations, i.e., the first argument x of any role R(x, y) is a verb.2 OntoNotes semantic roles follow PropBank framesets (Palmer et al., 2005). It uses a set of numbered arguments (ARG0–ARG5) whose meanings are verb-dependent, e.g., ARG2 is used for “employer” with verb work.01 and “expected terminus of sleep” with verb sleep.01. Additionally, it uses argument modifiers which share a common meaning across verbs (ARGMLOC, ARGM-TMP, ARGM-PRP, ARGM-CAU, etc.). For a detailed description of OntoNotes semantic roles, we refer the reader to the LDC catalog3 and PropBank (Palmer et al., 2005). To improve readability, we often rename numbered arguments, e.g., AGENT instead of ARG0 in Figure 1. 3 Related Work Approaches to extract PropBank-style semantic roles have been studied for years (Carreras and M`arquez, 2005), state-of-the-art tools sobtain Fmeasures of 83.5 (Lewis et al., 2015). In this paper, we complement semantic role representations with temporally-anchored spatial knowledge. Extracting additional meaning on top of popular corpora is by no means a new problem. Ger2We use the CoNLL 2011 Shared Task release (Pradhan et al., 2011), http://conll.cemantix.org/2011/ 3https://catalog.ldc.upenn.edu/LDC2013T19 ber and Chai (2010) augmented NomBank (Meyers et al., 2004) annotations with additional numbered arguments appearing in the same or previous sentences, and Laparra and Rigau (2013) presented an improved algorithm for the same task. The SemEval-2010 Task 10 (Ruppenhofer et al., 2009) targeted cross-sentence missing arguments in FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005). Silberer and Frank (2012) casted the SemEval task as an anaphora resolution task. We have previously proposed an unsupervised framework to compose semantic relations out of previously extracted relations (Blanco and Moldovan, 2011), and a supervised approach to infer additional argument modifiers (ARGM) for verbs in PropBank (Blanco and Moldovan, 2014). 
Unlike the current work, these previous efforts (1) improve the semantic representation of verbal and nominal predicates, or (2) infer relations between arguments of the same predicate. More recently, we showed that spatial relations can be inferred from PropBank-style semantic roles (Blanco and Vempala, 2015; Vempala and Blanco, 2016). In this paper, we expand on this idea as follows. First, we not only extract whether “x has LOCATION y” before, during or after an event, but also specify for how long before and after (seconds, minutes, hours, days, weeks, months, years, etc.). Second, we release crowdsourced annotations for 1,732 potential additional spatial relations. Third, we experiment with both gold and predicted linguistic information. Spatial semantics has received considerable attention in the last decade. The task of spatial role labeling (Kordjamshidi et al., 2011; Kolomiyets et al., 2013) aims at representing spatial information with so-called spatial roles, e.g., trajector, landmark, spatial and motion indicators, etc. Unlike us, spatial role labeling does not aim at extracting where entities are not located or temporally-anchored spatial information. But doing so is intuitive to humans, as the examples and crowdsourced annotations in this paper show. Spatial knowledge is intuitively associated with motion events, e.g., drive, go, fly, walk, run. Hwang and Palmer (2015) presented a classifier to detect caused motion constructions triggered by non-motion verbs, e.g., The crowd laughed the clown off the stage (i.e., the crowd made the clown leave the stage). Our work does not target motion verbs or motion constructions, 1503 as the examples in Table 3 show, non-motion constructions triggered by non-motion verbs also allow us to infer temporally-anchored spatial meaning, e.g., played, honored, taught, fighting. 4 Corpus Creation and Analysis Our goal is to complement semantic role representations with additional spatial knowledge. Specifically, our goal is to infer temporally-anchored spatial knowledge between x and y, where semantic roles ARGi(xverb, x) and ARGM-LOC(yverb, y) exists in the same sentence. In order to achieve this goal, we follow a two-step methodology. First, we automatically generate potential additional spatial knowledge by combining selected semantic roles. Second, we crowdsource annotations, including polarity and temporal anchors, to validate or discard the potential additional knowledge. 4.1 Generating Potential Spatial Knowledge We generate potential additional relations LOCATION(x, y) by combining all ARGi(xverb, x) and ARGM-LOC(yverb, y) semantic roles within a sentence (xverb and yverb need not be the same). Then, we enforce the following restrictions: 1. x and y must not overlap; 2. the head of x must be a named entity person, org, work of art, fac, norp, product or event; 3. the head of y must be a noun subsumed by physical entity.n.01 in WordNet, or a named entity fac, gpe, loc, or org;4 and 4. the heads of x and y must be different than the heads of all previously generated pairs. These restrictions were designed after manual analysis of randomly selected combinations of ARGi and ARGM-LOC semantic roles with two goals in mind: to (1) reduce the annotation effort and (2) generate the least amount of invalid potential additional spatial knowledge without arbitrarily discarding any predicates (e.g., focus only on motion verbs). 
Additional relations not satisfying restriction 1 are nonsensical, and restriction 4 simply discards potential additional relations that have already been generated. Restrictions 2 and 3 are designed to improve the likelihood that the potential additional spatial knowledge will not be 4For a description and examples of these named entity types, refer to (Weischedel and Brunstein, 2005). discarded when crowdsourcing annotations, e.g., locations whose head is an adverb such as here and there (11% of all ARGM-LOC roles) do not yield valid additional spatial knowledge. OntoNotes annotates 9,612 ARGM-LOC semantic roles, and the number of potential LOCATION relations generated is 1,732. Thus, our methodology aims at adding 18% of additional spatial relations on top of OntoNotes semantic roles. If we consider each temporal anchor as a different spatial relation, we aim at adding 54% additional spatial relations. As we shall see, over 69% of the additional potential relations are valid (Section 4.3). 4.2 Crowdsourcing Spatial Knowledge Once potential spatial knowledge is generated, it must be validated or discarded. We are interested in additional spatial knowledge as intuitively understood by humans, so we avoid lengthy annotation guidelines and ask simple questions to nonexperts via Amazon Mechanical Turk. After in-house pilot annotations, it became clear that asking “Is x located in/at y” for each potential LOCATION(x, y) and forcing annotators to answer yes or no is suboptimal. For example, consider again Figure 1 and question “Is John located in Berlin?”. An unabridged natural answer would be “not before or during drove, but certainly after drove for a few minutes until he was done picking up the package.” In other words, it is intuitive to consider polarity (whether x is or is not located at y) and temporal anchors (for how long?). We designed the interface in Figure 2 to gather annotations including polarity and temporal anchors, and accounting for granularity levels. Answers map to the following coarse-grained labels: • Before and after: yes, no, unk and inv. • During: yes (first 2 options), no, unk and inv. Label unk stands for unknown and inv for invalid. Furthermore, yes maps to these finegrained labels indicating specific periods of time: • Before and after: an integer and a unit of time (secs, mins, hours, days, weeks, months or years)5, or inf for infinity. • During: entire or some. 5The interface restricts the range of valid integers, e.g., numbers selectable with secs range from 1 to 59. 1504 Figure 2: Amazon Mechanical Turk interface to collect temporally-anchored spatial annotations. Annotators were also provided with a description and examples of all answers (not shown). secs mins hours days weeks months years inf entire some Before 0.20 7.55 11.33 7.36 3.78 8.15 46.72 14.91 n/a n/a During n/a n/a n/a n/a n/a n/a n/a n/a 97.77 2.23 After 0.50 6.48 11.29 6.48 3.34 6.29 29.47 36.15 n/a n/a Table 1: Percentage of fine-grained labels for instances annotated with coarse-grained label yes. Figure 3: Percentage of coarse-grained labels per temporal anchor. Total number of annotations is 1,732 × 3 = 5,196. We created one Human Intelligence Task (HIT) per potential LOCATION(x, y), and recruited annotators with previous approval rate ≥95% and 5,000 or more previous approved HITs. A total of 74 annotators participated in the task, on average, they annotated 163.24 HITs (maximum: 1,547, minimum: 1). 
We rejected submissions that took unusually short time compared to other submissions, and those from annotators who always chose the same label. Overall, we only rejected 1.2% of submissions. We collected 7 annotations per HIT and paid $0.05 per HIT. 4.3 Annotation Analysis Figure 3 shows the percentage of coarse-grained labels per temporal anchor. Labels yes and no combined account for 67.7% of labels (before), 77.4% (during) and 63.1% (after). Note that both yes and no yield valid additional spatial knowledge: whether x is (or is not) located at y. Annotators could not commit to yes or no in 16.1% of questions on average (unk), with a much smaller percentage for during temporal anchor (5.8%; before: 18.7%, after: 23.7%). This is not surprising, as arguments of some verbs, e.g., AGENT of play, must be located at the location of the event during the event, but not necessarily before or after. Finally, inv only accounts for 14.6% of labels (before: 13.7%, during: 16.9%, after: 13.2%), thus most potential additional knowledge automatically generated (Section 4.1) can be understood. Percentages of fine-grained labels per temporal span, i.e., refinements of yes coarse-grained labels, are shown in Table 1. The vast majority of times (97.77%) annotators believe an entity is at a location during an event, the entity is there for the entire duration of the event (entire). Annotators barely used label secs (before: 0.20% and after: 0.50%), but percentages range between 3.34% and 46.72% for other units of time (uniform distribution would be 1/8 = 12.5%). Labels years and inf, which indicate that an entity is located somewhere for years or indefinitely before (or after) an event, are the most common fine-grained labels for before and after (14.91–46.72%). 4.3.1 Annotation Quality Table 2 presents agreement measures. Pearson correlations are the weighted averages between each annotator and the majority label and are calculated following this mapping: (coarse labels): yes: 1, unk/inv: 0, no: −1; (fine labels): before/after: secs: 1, mins: 1 + 1/7, hours: 1505 Coarse-grained labels Fine-grained labels Pearson % instances s.t. a annotators agree Pearson % instances s.t. a annotators agree a = 7 a ≥6 a ≥5 a ≥4 a ≥3 a = 7 a ≥6 a ≥5 a ≥4 a ≥3 Before 0.73 0.9 8.0 30.0 65.8 97.2 0.67 0.8 6.0 21.6 49.3 85.2 During 0.81 8.9 39.9 59.1 81.4 98.3 0.79 2.1 19.5 45.8 71.8 94.1 After 0.66 0.7 6.0 27.0 62.9 96.6 0.62 0.5 4.6 19.8 49.9 87.1 All 0.67 3.5 18.0 38.8 70.0 97.4 0.64 1.1 10.0 29.0 57.0 88.8 Table 2: Weighed Pearson correlations between annotators and the majority label, and percentage of instances for which at least 7, 6, 5, 4 and 3 annotators (out of 7) agree. Before During After Statement C F C F C F Statement 1: [. . . ] [Hsia]ARG0, v1,v2 [stopped]v1 off [in Milan]ARGM-LOC,v1 [to [visit]v2 [Hsiao Chin]ARG1,v2]ARGM-PRP,v1 . x: Hsia, y: Milan, yverb: stopped yes mins yes entire yes hours x: Hsiao Chin, y: Milan, yverb: stopped yes years yes entire yes years Statement 2: [President Clinton]ARG0,v1 [played]v1 [a supporting role]ARG1,v1 [today]ARGM-TMP,v1 [in [New York City where]ARGM-LOC,v2 [the first lady, Senator Clinton]ARG1,v2, was [honored]v2 [at Madison Square Garden]ARGM-LOC,v2 ]ARGM-LOC,v1. 
x: (President) Clinton, y: New York City, yverb: played yes hours yes entire yes hours x: (President) Clinton, y: Madison Square Garden, yverb: honored yes mins yes entire yes mins x: (Senator) Clinton, y: New York City, yverb: played yes hours yes entire yes hours x: (Senator) Clinton, y: Madison Square Garden, yverb: honored yes mins yes entire yes mins Statement 3: [Before [joining]v2 [Maidenform]ARG1,v2 [in 1972]ARGM-TMP, V2]ARGM-TMP, v1, [[Mr. Brawer, who]ARG0,v3 [holds]v3 [a doctoral degree in English]ARG1,v3]ARG0, v1,v2, [taught]v1 [at the University of Wisconsin]ARGM-LOC, v1. x: Maidenform, y: University of Wisconsin , yverb: taught no n/a no n/a no n/a x: Mr. Brawer, y: University of Wisconsin, yverb: taught no n/a yes entire no n/a Statement 4: [...] [George Koskotas, self-confessed embezzler]ARG0, v1, [now]ARGM-TMP,v1 [residing]v1 [in [a jail cell in Salem, Mass., from where]ARGM-LOC, v2 [he]ARG0, v2 is [fighting]v2 [extradition proceedings]ARG1,v2]ARG1,v1. x: George Koskotas, y: a jail cell in Salem, Mass., yverb: fighting yes months yes entire unk n/a Table 3: Annotation examples. For each statement, we indicate semantic roles with square brackets, all potential additional spatial knowledge (is x located at y?), and annotations with respect to yverb (coarse(C) and fine-grained (F) labels per temporal anchor: before, during and after). 1 + 2/7, days: 1 + 3/7, weeks: 1 + 4/7, months: 1 + 5/7, years: 1 + 6/7, inf: 2; during: some: 1 entire:2. Calculating the weighted average of individual Pearson correlations allows us to take into account the number of questions answered by each annotator. Correlations range between 0.66 and 0.81 with coarse-grained labels, and are slightly lower with fine-grained labels (0.67 vs. 0.73, 0.79 vs. 0.81, and 0.62 vs. 0.66). Questions for during temporal anchor are easier to answer with both kinds of labels (coarse: 0.81, fine: 0.79). Table 2 also shows how many annotators (out of 7) chose the same label (exact match). At least 4 annotators agreed with coarse-grained labels in most instances (70%), and at least 3 annotators agreed virtually always (97.4%). Percentages are lower with fine-grained labels: 57.0% and 88.8%. 4.4 Annotation Examples Table 3 presents several annotation examples. We include all potential additional spatial knowledge (Section 4.1) and annotations per temporal anchor. Two additional LOCATION(x, y) can be inferred from Statement (1): whether Hsia and Hisao Chin are located in Milan before, during and after stopped. Annotators understood that Hsia was in Milan temporarily: for a few minutes before stopped, during the full duration of stopped and for a few hours after stopped. In other words, Hsia was elsewhere, then went to Milan and left after visiting with Hsiao for a few hours. Regarding Hsiao, annotators interpreted that Milan is her permanent location: for years before and after Hsia stopped to visit her. While somehow ambiguous, these annotations are reasonably intuitive. Statement (2) has 2 ARGM-LOC roles and 4 potential additional relations. Annotations for during are straightforward: both President Clinton and Senator Clinton were located in New York City during played and at Madison Square Garden during honored. Annotations for before and after are more challenging: both Clintons where located in New York City for hours (not days) before and after played, but at Madison Square Garden for a few minutes (not hours) before and after honored. 
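Returning to the agreement figures of Section 4.3.1, the weighted Pearson correlations can be reproduced roughly as below, under mild assumptions about the data layout. The per-annotator weighting by number of answered questions follows the description above; the handling of annotators with constant label vectors is our own guess.

```python
import numpy as np
from scipy.stats import pearsonr

# Numeric mapping from Section 4.3.1.
COARSE_TO_NUM = {"yes": 1.0, "unk": 0.0, "inv": 0.0, "no": -1.0}
FINE_TO_NUM = {"secs": 1.0, "mins": 1 + 1/7, "hours": 1 + 2/7, "days": 1 + 3/7,
               "weeks": 1 + 4/7, "months": 1 + 5/7, "years": 1 + 6/7, "inf": 2.0,
               "some": 1.0, "entire": 2.0}

def weighted_pearson(annotations, majority, mapping):
    """Weighted average of per-annotator Pearson correlations vs. the majority label.

    `annotations` maps annotator id -> {item id -> label string}; `majority`
    maps item id -> majority label. Annotators are weighted by the number of
    items they answered. Labels are coarse or fine strings, depending on
    which agreement is being computed.
    """
    correlations, weights = [], []
    for labels in annotations.values():
        items = sorted(labels)
        a = [mapping[labels[i]] for i in items]
        m = [mapping[majority[i]] for i in items]
        if len(items) > 1 and np.std(a) > 0 and np.std(m) > 0:
            r, _ = pearsonr(a, m)
            correlations.append(r)
            weights.append(len(items))
    return float(np.average(correlations, weights=weights))
```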
1506 Feature Description basic 1–4 xverb, yverb and their part-of-speech tags lexical 5–12 first and last words of x and y, and their part-of-speech tags 13 whether x occurs before or after y heads 14–17 heads of x and y, and their part-of-speech tags 18–19 named entity types of the heads of x and y semantic 20 semantic role label linking xverb and x 21–24 number of ARGM-TMP and ARGM-LOC roles in xroles and yroles 25–26 number of ARGM-TMP and ARGM-LOC roles in the sentence to which x and y belong 27 whether xverb and yverb are the same verb Table 4: Feature set to determine whether x is (or is not) located at y, and for how long. xverb (yverb) denote the verbs to which x (y) attach, and xroles (yroles) denote the semantic roles of xverb (yverb). In other words, they arrived to Madison Square Garden shortly before honored and left shortly after, but stayed in New York City for some hours. Statement (3) exemplifies no label. Potential additional spatial knowledge includes whether Maidenform is located at University of Wisconsin, which is never true (no). Additionally, University of Wisconsin was a location of Mr. Brawer while he taught there (during), but nor before or after. Statement (4) exemplifies contrastive coarsegrained labels and unk label. Annotators interpreted that George Koskotas was in the jail cell for months before and during fighting extradition, and that it is unknown (unk) after fighting because the outcome of the fight is unknown. 5 Inferring Temporally-Anchored Spatial Knowledge We follow a standard machine learning approach, and use the training, development and test sets released by the organizers of the CoNLL-2011 Shared Task (Pradhan et al., 2011). We first generate additional spatial knowledge deterministically as described in Section 4.1. Then, for each additional LOCATION(x, y), we generate one instance per temporal anchor and discard those annotated inv. The total number of instances is 1,732 × 3 −754 = 4,442. We trained SVM models with RBF kernel using scikit-learn (Pedregosa et al., 2011). The feature set and SVM parameters were tuned using 10-fold cross-validation with the train and development sets, and results are calculated using the test set. During the tuning process, we discovered that it is beneficial to train one SVM per temporal anchor instead of a single model for the 3 temporal anchors. 5.1 Feature Selection We use a mixture of standard features from semantic role labeling, and semantic features designed for extracting temporally-anchored spatial knowledge from semantic roles. In order to determine whether x is (or is not) located at y and for how long, we extract features from x and y, the verbs to which they attach (xverb and yverb) and all semantic roles of xverb and yverb (xroles and yroles). Basic, lexical and heads features are standard in role labeling (Gildea and Jurafsky, 2002). Basic features are the word form and part-of-speech of xverb and yverb. Lexical features capture the first and last words of x and y and their part-of-speech tags, as well as a binary flag indicating whether x occurs before or after y. Heads features capture the heads of x and y and their part-of-speech tags, as well as their named entity types, if any. Semantic features include features 20–27. Feature 20 indicates the semantic role linking x and xverb (ARG0, ARG1, ARG2, etc.); recall that the semantic role between y and yverb is always ARGMLOC (Section 4.1). 
Features 21–24 are counts of ARGM-TMP and ARGM-LOC semantic roles in the verb-argument structures to which x and y attach. Features 25–26 are the same counts of roles, but taking into account all the roles in the sentence to which x and y belong. Finally, feature 27 signals whether x and y attach to the same verb. We tried many other features, including counts of all roles, heads of all semantic roles present, semantic role ordering, VerbNet (Schuler, 2005) and Levin (Levin, 1993) verb classes, and WordNet hypernyms (Miller, 1995), but they did not yield any improvements during the tuning process. We exemplify features with pair (x: George Koskotas, self-confessed embezzler, y: a jail cell in [...], from where) from Statement 4 in Table 3: • Basic: features 1–4: {residing, VBG, fighting, VBG}. • Lexical: feature 5–12: {George, NNP, Koskotas, NNP, a, DT, where, WRB}, features 13: {before}. 1507 Before During After All P R F P R F P R F P R F baseline yes 0.00 0.00 0.00 0.77 1.00 0.87 0.00 0.00 0.00 0.77 0.55 0.64 no 0.49 1.00 0.65 0.00 0.00 0.00 0.40 1.00 0.57 0.44 0.86 0.58 unk 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Weighted avg. 0.24 0.49 0.32 0.60 0.77 0.67 0.16 0.40 0.23 0.51 0.55 0.50 basic yes 0.48 0.34 0.40 0.82 0.94 0.88 0.53 0.50 0.52 0.71 0.71 0.71 no 0.52 0.63 0.57 0.62 0.36 0.46 0.47 0.49 0.48 0.51 0.54 0.52 unk 0.23 0.21 0.22 0.33 0.10 0.15 0.29 0.29 0.29 0.26 0.23 0.25 Weighted avg. 0.44 0.45 0.44 0.76 0.79 0.76 0.44 0.44 0.44 0.55 0.56 0.56 basic + lexical + heads yes 0.68 0.37 0.48 0.83 0.92 0.87 0.73 0.38 0.50 0.79 0.67 0.73 no 0.59 0.80 0.68 0.47 0.32 0.38 0.53 0.66 0.59 0.56 0.68 0.61 unk 0.36 0.29 0.32 0.20 0.10 0.13 0.32 0.39 0.35 0.33 0.32 0.32 Weighted avg. 0.56 0.56 0.54 0.73 0.77 0.74 0.54 0.50 0.50 0.62 0.61 0.61 basic + lexical + heads + semantics yes 0.86 0.44 0.58 0.85 0.94 0.90 0.79 0.38 0.51 0.84 0.70 0.77 no 0.63 0.80 0.71 0.56 0.47 0.50 0.55 0.69 0.61 0.59 0.71 0.64 unk 0.37 0.38 0.38 0.50 0.10 0.17 0.33 0.42 0.37 0.35 0.37 0.36 Weighted avg. 0.64 0.60 0.60 0.78 0.81 0.78 0.57 0.52 0.52 0.66 0.64 0.65 Table 5: Results obtained with gold-standard linguistic annotations and coarse-grained labels using the baseline and several feature combinations (basic, lexical, heads and semantic features). • Head: features 14–17: {Koskotas, NNP, cell, NN}, features 18–19: {person, none}, • Semantic feature 20: {ARG0}, features 21– 24: {1, 0, 0, 1}, feature 25–26: {1, 1}, feature 27: {no}. 6 Experiments and Results We present results using gold-standard (Section 6.1) and predicted (Section 6.2) linguistic annotations. POS tags, parse trees, named entities and semantic roles are taken directly from gold or auto files in the CoNLL-2011 Shared Task release. 6.1 Gold-Standard Linguistic Annotations Using gold-standard linguistic annotations has two advantages. First, because we have gold semantic roles and named entities, we generate the same potential additional spatial knowledge generated while creating our annotations (Section 4.1). Second, feature values are guaranteed to be correct. 6.1.1 Predicting Coarse-Grained Labels Table 5 presents results with coarse-grained labels using a baseline and learning with several combinations of features extracted from gold-standard linguistic annotations (POS tags, parse trees, semantic roles, etc.). The baseline predicts the most frequent label per temporal anchor, i.e., yes for during, and no for before and after (Figure 3). 
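Before turning to the results, the learning setup of Section 5 can be sketched as one RBF-kernel SVM per temporal anchor, with the mostly categorical features of Table 4 vectorised and hyperparameters tuned by cross-validation. The pipeline layout, parameter grid and scoring function below are illustrative choices, not necessarily those of the authors, who tuned on the CoNLL-2011 train and development sets.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_anchor_models(instances):
    """Train one RBF-kernel SVM per temporal anchor.

    `instances[anchor]` is assumed to be a list of (feature_dict, label)
    pairs, where feature_dict holds the features of Table 4, e.g.
    {"xverb": "residing", "xverb_pos": "VBG", ..., "role_x": "ARG0"}.
    """
    models = {}
    for anchor, data in instances.items():
        feats, labels = zip(*data)
        pipeline = make_pipeline(DictVectorizer(sparse=True), SVC(kernel="rbf"))
        search = GridSearchCV(
            pipeline,
            param_grid={"svc__C": [0.1, 1, 10, 100],
                        "svc__gamma": ["scale", 0.01, 0.001]},
            cv=10, scoring="f1_weighted")
        search.fit(list(feats), list(labels))
        models[anchor] = search.best_estimator_     # one model per anchor
    return models
```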
Best results for all labels and temporal anchors are obtained with all features (basic, lexical, heads and semantics). Overall F-measure is 0.65, and during instances obtain higher F-measure (0.78) than before (0.60) and after (0.52). Regarding labels, yes obtains best results (overall 0.77), followed by no (0.64) and unk (0.36). Not surprisingly, the most frequent label per temporal anchor obtains the best results with all features (before: no, 0.71; during: yes, 0.90; after: no, 0.61). Before and after instances benefit the most from learning with all features with respect to the baseline (before: 0.32 vs. 0.60, after: 0.23 vs. 0.52). While during instances also benefit, the difference in F-measure is lower (0.67 vs. 0.78). Feature Ablation. The bottom 3 blocks in Table 5 present results using several feature types incrementally. Basic features yield an overall Fmeasure of 0.56, and surprisingly good results for during instances (0.76). Indeed, the best performance obtained with during instances is 0.78 (all features), suggesting that the verbs to which x and y attach are very strong features. Lexical and heads features are most useful for before (0.44 vs. 0.54, +22.7%) and after (0.44 vs. 0.50, +13.6%) instances, and are actually detrimental for during instances (0.76 vs. 0.74, -2.6%). Including semantic features, however, improves results with respect to basic features for all temporal anchors: before: 0.44 vs. 0.60, 36.4% during: 0.76 vs. 0.78, 2.6% after: 0.44 vs. 0.52, 18.2%. Differences in overall F-measure are not statistically significant between basic and basic + lexical + heads (0.56 vs. 0.61, Z-test, two-tailed, p-value = 0.05), but the difference including semantic features is significant (0.50 vs. 0.65, Z-test, two-tailed, p-value = 0.009). 1508 Before During After All P R F P R F P R F P R F baseline spurious 0.50 1.00 0.66 0.50 1.00 0.66 0.50 1.00 0.66 0.50 1.00 0.66 other 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Weighted avg. 0.25 0.50 0.33 0.25 0.50 0.33 0.25 0.50 0.33 0.25 0.50 0.33 basic yes 0.00 0.00 0.00 0.47 0.56 0.51 0.00 0.00 0.00 0.52 0.36 0.43 no 0.55 0.33 0.42 0.40 0.20 0.27 0.43 0.38 0.41 0.43 0.32 0.37 unk 0.26 0.30 0.28 0.11 0.07 0.08 0.37 0.25 0.30 0.24 0.18 0.21 spurious 0.68 0.91 0.78 0.68 0.71 0.70 0.67 0.93 0.78 0.69 0.88 0.77 Weighted avg. 0.51 0.58 0.53 0.53 0.55 0.54 0.49 0.58 0.52 0.54 0.58 0.55 basic + lexical + heads + semantics yes 1.00 0.07 0.13 0.74 0.87 0.80 0.00 0.00 0.00 0.74 0.56 0.64 no 0.64 0.48 0.55 0.67 0.20 0.31 0.43 0.46 0.44 0.53 0.49 0.51 unk 0.41 0.78 0.54 0.50 0.47 0.48 0.41 0.61 0.49 0.51 0.68 0.58 spurious 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 Weighted avg. 0.82 0.75 0.73 0.84 0.84 0.83 0.66 0.71 0.68 0.80 0.79 0.79 Table 7: Results obtained with predicted linguistic annotations and coarse-grained labels. spurious is a new label indicating overgenerated pairs not present in the gold standard. Label P R F Before mins 1.00 0.67 0.80 days 1.00 0.50 0.67 years 0.15 0.17 0.16 inf 1.00 0.33 0.50 no 0.63 0.80 0.71 unk 0.37 0.38 0.38 other 0.00 0.00 0.00 Weighted avg. 0.52 0.54 0.51 During entire 0.84 0.94 0.88 some 0.00 0.00 0.00 no 0.56 0.45 0.50 unk 0.50 0.10 0.17 Weighted avg. 0.77 0.80 0.77 After years 0.27 0.25 0.26 inf 0.56 0.24 0.33 no 0.55 0.69 0.61 unk 0.33 0.42 0.37 other 0.00 0.00 0.00 Weighted avg. 
0.41 0.44 0.41 All mins 0.50 0.50 0.50 days 1.00 0.29 0.44 years 0.21 0.21 0.21 inf 0.67 0.27 0.38 entire 0.84 0.94 0.89 no 0.59 0.71 0.64 unk 0.35 0.37 0.36 other 0.00 0.00 0.00 Weighted avg. 0.56 0.59 0.57 Table 6: Results obtained with gold linguistic annotations and fine-grained labels using all features. 6.1.2 Predicting Fine-Grained Labels Table 6 presents results using fine-grained labels and all features. Overall F-measure is lower than with coarse-grained labels (0.57 vs. 0.65). Results for during instances barely decreases (0.78 vs. 0.77) because almost 98% of fine-grained labels are entire (Table 1). Most fine-grained labels for before and after are infrequent (Table 1), our best model is unable to predict labels secs, hours, weeks and months for before, and secs, mins, hours, days, weeks and months for after (other rows). But these labels account for relatively few instances: individually, between 0.2% and 11.33%, and among all of them, 23.46% for before and 34.38% for after instances. It is worth noting that mins, days and inf obtain relatively high F-measures for before: 0.80, 0.67 and 0.50 respectively. In other words, we can distinguish whether an entity is somewhere only for a few minutes or days (but not longer) before an event, or at all times before an event. 6.2 Predicted Linguistic Annotations In order to make an honest evaluation in a realistic environment, we also experiment with predicted linguistic annotations. The major disadvantage of doing so is that predicted semantic roles and named entities are often incorrect or missing, thus we generate spurious additional spatial knowledge and miss some additional spatial knowledge because the potential relation cannot be generated. Table 7 presents results using predicted linguistic annotations. The additional label spurious is used for instances generated from incorrect semantic roles or named entities, as these instances do not appear in the crowdsourced annotations (Section 4). Due to space constraints, we only present results using coarse-grained labels, but provide results per temporal anchor. The baseline, which predicts the most likely label per temporal anchor, always predicts spurious since 50% of generated additional potential knowledge does not appear in the crowdsourced annotations. Using all features clearly outperforms basic features (overall F-measure: 1509 0.79 vs 0.55), thus we focus on the former. Using all features, spurious is always predicted correctly. While useful to discard additional spatial knowledge that should not have been generated, spurious does not allow us to make meaningful inferences. The labels that we are most interested in, yes and no, obtain overall Fmeasures of 0.64 and 0.51 (compared to 0.77 and .64 with gold linguistic annotations). Regarding labels, yes can only be reliably predicted for during instances (F-measure: 0.80), and no is predicted with modest F-measures for all temporal anchors: before: 0.55, during: 0.31, after: 0.44. 7 Conclusions This paper demonstrates that semantic roles are a reliable semantic layer from which one can infer whether entities are located or not located somewhere, and for how long (seconds, minutes, days, years, etc.). Crowdsourced annotations show that this kind of inferences are intuitive to humans. Moreover, most potential additional spatial knowledge generated following a few simple deterministic rules was validated by annotators (yes and no; before: 67.7%, during: 77.4%, after: 63.1%). 
Experimental results with gold-standard semantic roles and named entities show that inference can be done with standard supervised machine learning (overall F-measure: 0.65, yes: 0.77, no: 0.64). Using predicted linguistic information, results decrease substantially (yes: 0.64, no: 0.51). This is mostly due to the fact that predicted semantic roles and named entities are often wrong or missing, and this fact unequivocally makes the inference process more challenging. We believe that combining semantic roles and other semantic representation in a similar fashion to the one used in this paper could be useful to infer knowledge beyond spatial inferences. For example, one could infer who is in POSSESSION of something over time by manipulating the events in which the object in question participates in. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th international conference on Computational Linguistics, Montreal, Canada. Steven Bethard and James H. Martin. 2008. Learning semantic links from a corpus of parallel temporal and causal relations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short ’08, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Eduardo Blanco and Dan Moldovan. 2011. Unsupervised learning of semantic relation composition. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011), pages 1456–1465, Portland, Oregon. Eduardo Blanco and Dan Moldovan. 2014. Leveraging verb-argument structures to infer semantic relations. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 145–154, Gothenburg, Sweden, April. Association for Computational Linguistics. Eduardo Blanco and Alakananda Vempala. 2015. Inferring temporally-anchored spatial knowledge from semantic roles. In Proceedings of the 2015 Annual Conference of the North American Chapter of the ACL, pages 452–461. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role labeling. In CONLL ’05: Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 152–164, Morristown, NJ, USA. Association for Computational Linguistics. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273– 284. Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592, Uppsala, Sweden, July. Association for Computational Linguistics. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist., 28(3):245–288, September. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Sebastian ´O S´eaghdha, Diarmuid andPad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38, Uppsala, Sweden, July. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% Solution. 
In NAACL ’06: Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers on XX, pages 57–60, Morristown, NJ, USA. Association for Computational Linguistics. 1510 Jena D. Hwang and Martha Palmer. 2015. Identification of caused motion construction. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 51–60, Denver, Colorado, June. Association for Computational Linguistics. Oleksandr Kolomiyets, Parisa Kordjamshidi, MarieFrancine Moens, and Steven Bethard. 2013. Semeval-2013 task 3: Spatial role labeling. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 255–262. Association for Computational Linguistics. Parisa Kordjamshidi, Martijn Van Otterlo, and MarieFrancine Moens. 2011. Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Trans. Speech Lang. Process., 8(3):4:1–4:36, December. Egoitz Laparra and German Rigau. 2013. ImpAr: A Deterministic Algorithm for Implicit Semantic Role Labelling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1180–1189, Sofia, Bulgaria, August. Association for Computational Linguistics. Beth Levin. 1993. English verb classes and alternations : a preliminary investigation. Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint a* ccg parsing and semantic role labelling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1444–1454, Lisbon, Portugal, September. Association for Computational Linguistics. Llu´ıs M`arquez, Xavier Carreras, Kenneth C Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: an introduction to the special issue. Computational linguistics, 34(2):145–159. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating noun argument structure for nombank. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC-2004). George A. Miller. 1995. Wordnet: A lexical database for english. In Communications of the ACM, volume 38, pages 39–41. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 192–199, Stroudsburg, PA, USA. Association for Computational Linguistics. Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. Conll-2011 shared task: Modeling unrestricted coreference in ontonotes. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–27, Portland, Oregon, USA, June. Association for Computational Linguistics. 
James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics, volume 2003, page 40. Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2009. SemEval-2010 Task 10: Linking Events and Their Participants in Discourse. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 106–111, Boulder, Colorado, June. Association for Computational Linguistics. Karin Kipper Schuler. 2005. Verbnet: A Broadcoverage, Comprehensive Verb Lexicon. Ph.D. thesis, Philadelphia, PA, USA. AAI3179808. Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 12–21, Prague, Czech Republic, June. Association for Computational Linguistics. Carina Silberer and Anette Frank. 2012. Casting implicit role linking as an anaphora resolution task. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval ’12, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics. Kristina Toutanova, Aria Haghighi, and Christopher D Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 589–596. Association for Computational Linguistics. Alakananda Vempala and Eduardo Blanco. 2016. Complementing semantic roles with temporally an1511 chored spatial knowledge: Crowdsourced annotations and experiments. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2652–2658. Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Technical report, Linguistic Data Consortium, Philadelphia. 1512
2016
142
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1513–1524, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics LEXSEMTM: A Semantic Dataset Based on All-words Unsupervised Sense Distribution Learning Andrew Bennett,♥Timothy Baldwin,♥Jey Han Lau,♥♦ Diana McCarthy,♣and Francis Bond♠ ♥Dept of Computing and Information Systems, The University of Melbourne ♦IBM Research ♣Dept of Theoretical and Applied Linguistics, University of Cambridge ♠Linguistics and Multilingual Studies, Nanyang Technological University [email protected], [email protected], [email protected], [email protected], [email protected] Abstract There has recently been a lot of interest in unsupervised methods for learning sense distributions, particularly in applications where sense distinctions are needed. This paper analyses a state-of-the-art method for sense distribution learning, and optimises it for application to the entire vocabulary of a given language. The optimised method is then used to produce LEXSEMTM: a sense frequency and semantic dataset of unprecedented size, spanning approximately 88% of polysemous, English simplex lemmas, which is released as a public resource to the community. Finally, the quality of this data is investigated, and the LEXSEMTM sense distributions are shown to be superior to those based on the WORDNET first sense for lemmas missing from SEMCOR, and at least on par with SEMCOR-based distributions otherwise. 1 Introduction Word sense disambiguation (WSD), as well as more general problems involving word senses, have been of great interest to the NLP community for many years (for a detailed overview, see Agirre and Edmonds (2007) and Navigli (2009)). In particular, there has recently been a lot of work on unsupervised techniques for these problems. This includes unsupervised methods for performing WSD (Postma et al., 2015; Chen et al., 2014; Boyd-Graber et al., 2007; Brody et al., 2006), as well as complementary problems dealing with word senses (Jin et al., 2009; Lau et al., 2014). One such application has been the automatic learning of sense distributions (McCarthy et al., 2004b; Lau et al., 2014). A sense distribution is a probability distribution over the senses of a given lemma. For example, if the noun crane had two senses, bird and machine, then a hypothetical sense distribution could indicate that the noun is expected to take the machine meaning 60% of the time and the bird meaning 40% of the time in a representative corpus. Sense distributions (or simple “first sense” information) are used widely in tasks including information extraction (Tandon et al., 2015), novel word sense detection (Lau et al., 2012; Lau et al., 2014), semi-automatic dictionary construction (Cook et al., 2013), lexical simplification (Biran et al., 2011), and textual entailment (Shnarch et al., 2011). Automatically acquired sense distributions themselves are also used to improve unsupervised WSD, for example by providing a most frequent sense heuristic (McCarthy et al., 2004b; Jin et al., 2009) or by improving unsupervised usage sampling strategies (Agirre and Martinez, 2004). Furthermore, the improvement due to the most frequent sense heuristic has been particularly strong when used with domain-specific data (Koeling et al., 2005; Chan and Ng, 2006; Lau et al., 2014). 
In addition, there is great scope to use these techniques to improve existing sense frequency resources, which are currently limited by the bottleneck of requiring manual sense annotation. The most prominent example of such a resource is WORDNET (Fellbaum, 1998), where the sense 1513 frequency data is based on SEMCOR (Miller et al., 1993), a 220,000 word corpus that has been manually tagged with WORDNET senses. This data is full of glaring irregularities due to its age and the limited size of the corpus; for example, the word pipe has its most frequent sense listed as tobacco pipe, whereas one might expect this to be tube carrying water or gas in modern English (McCarthy et al., 2004a). This is likely due to the more common use of the tobacco pipe sense in mid-20th century literature. The problem is particularly highlighted by the fact that out of the approximately 28,000 polysemous simplex lemmas in WORDNET 3.0, approximately 61% have no sense annotations at all, and less than half of the remaining lemmas have at least 5 sense annotations! Unfortunately, there has been a lack of work investigating how to apply sense learning techniques at the scale of a full lexical resource such as WORDNET. Updating language-wide sense frequency resources would require learning sense distributions over the entire vocabularies of languages, which could be extremely computationally expensive. To make things worse, domain differences could require learning numerous distributions per word. Despite this, though, we would not want to make these techniques scalable at the expense of sense distribution quality. Therefore, we would like to understand the tradeoff between the accuracy and computation time of these techniques, and optimise this tradeoff. This could be particularly critical in applying them in an industrial setting. The current state-of-the-art technique for unsupervised sense distribution learning is HDP-WSI (Lau et al., 2014). In order to address the above concerns, we provide a series of investigations exploring how to best optimise HDP-WSI for largescale application. We then use our optimised technique to produce LEXSEMTM,1 a semantic and sense frequency dataset of unprecedented size, spanning the entire vocabulary of English. Finally, we use crowdsourced data to produce a new set of gold-standard sense distributions to accompany LEXSEMTM. We use these to investigate the quality of the sense frequency data in LEXSEMTM with respect to SEMCOR. 1LEXSEMTM, as well as code for accessing LEXSEMTM and reproducing our experiments is available via: https://github.com/awbennett/LexSemTm 2 Background and Related Work Given the difficulty and expense of obtaining large-scale and robust annotated data, unsupervised approaches to problems involving word learning and recognising word senses have long been studied in NLP. Perhaps the most famous such problem is word sense disambiguation (WSD), for which many unsupervised solutions have been proposed. Some methods are very complex, performing WSD separately for each word usage using information such as word embeddings of surrounding words (Chen et al., 2014) or POStags (Lapata and Brew, 2004). On the other hand, most approaches make use of the difficult-to-beat most frequent sense (MFS) heuristic (McCarthy et al., 2007), which assigns each usage of a given word-type to its most frequent sense. Given the popularity of the MFS heuristic, much of the past work on unsupervised techniques has focused on identifying the most frequent sense. 
The original method of this kind was proposed by McCarthy et al. (2004b), which relied on finding distributionally similar words to the target word, and comparing these to the candidate senses. Most subsequent approaches have followed a similar approach, based on the words appearing nearby the target word across its token usages. Boyd-Graber and Blei (2007) formalise the method of McCarthy et al. (2004b) with a probabilistic model, while others take different approaches, such as adapting existing sense frequency data to specific domains (Chan and Ng, 2005; Chan and Ng, 2006), using coarse grained thesaurus-like sense inventories (Mohammad and Hirst, 2006), adapting information retrieval-based methods (Lapata and Keller, 2007), using ensemble learning (Brody et al., 2006), utilising the network structure of WORDNET (Boyd-Graber et al., 2007), or making use of word embeddings (Bhingardive et al., 2015). Alternatively, Jin et al. (2009) focus on how best to use the MFS heuristic, by identifying when best to apply it, based on sense distribution entropy. Perhaps the most promising approach is that of Lau et al. (2014), due to its state-of-the art performance, and the fact that it can easily by applied to any language and any sense repository containing sense glosses. The task we are interested in — namely, sense distribution learning — is in principle very similar to identifying the MFS. Indeed, of these methods for identifying the MFS, some of them are 1514 explicitly described in terms of sense distribution learning (Chan and Ng, 2005; Chan and Ng, 2006; Lau et al., 2014), while the others implicitly learn sense distributions by calculating some kind of scores used to rank senses. The state-of-the-art technique of Lau et al. (2014) that we are building upon involves performing unsupervised word sense induction (WSI), which itself is implemented using nonparametric HDP (Teh et al., 2006) topic models, as detailed in Section 3. The WSI component, HDP-WSI, is based on the work of Lau et al. (2012), which at the time was state-of-the-art. Since then, however, other competitive WSI approaches have been developed, involving complex structures such as multi-layer topic models (Chang et al., 2014), or complex word embedding based approaches (Neelakantan et al., 2014). We have not used these approaches in this work on account of their complexity and likely computational cost, however we believe they are worth future exploration. On the other hand, because HDP-WSI is implemented using topic models, it can be customised by replacing HDP with newer, more efficient topic modelling algorithms. Recent work has produced more advanced topic modelling approaches, some of which are extensions of existing approaches using more advanced learning algorithms or expanded models (Buntine and Mishra, 2014), while others are more novel, involving variations such as neural networks (Larochelle and Murray, 2011; Cao et al., 2015), or incorporating distributional similarity of words (Xie et al., 2015). Of these approaches, we chose to experiment with that of Buntine and Mishra (2014) because a working implementation was readily available, it has previously shown very strong performance in terms of accuracy and speed, and it is similar to HDP and thus easy to incorporate into our work. 3 HDP-WSI Sense Learning HDP-WSI (Lau et al., 2014) is a state-of-theart unsupervised method for learning sense distributions, given a sense repository with per-sense glosses. 
It takes as input a collection of example usages of the target lemma2 and the glosses 2Except where stated otherwise, a lemma usage includes the sentence containing the lemma, and the two immediate neighbouring sentences (if available). It is assumed that each usage has been normalised via lemmatisation and stopword removal, and extra local-context tokens are added, as was for each target sense, and produces a probability distribution over the target senses. At the heart of HDP-WSI is HDP (Teh et al., 2006), a nonparametric topic modelling technique. It is a generative probabilistic model and uses topics as a latent variable to allow statistical sharing between documents, providing a kind of softclustering mixture model. Each document is assumed to have a corresponding distribution over these topics, and each topic is assumed to have a corresponding distribution over words. According to the model, each word for a given document is independently generated by first sampling a topic according to that document’s distribution over topics, and then sampling a word according to the topic’s distribution over words. Unlike older topic modelling methods such as LDA (Blei et al., 2003), HDP is nonparametric, meaning the number of topics used by the model is automatically learnt, and does not need to be set as a hyperparameter. In other words, the model automatically learns the “right” number of topics for each lemma. HDP-WSI follows a two-step process: word sense induction (WSI), followed by topic–sense alignment. WSI is performed using HDP based on the earlier work of Lau et al. (2012): each usage of the target lemma is treated as a document, and HDP topic modelling is run on this document collection. This gives a variable number of learnt topics, which are the senses induced by WSI. A single topic is then assigned to each document,3 and a distribution over these topics is learnt using maximum likelihood estimation. In the second step of HDP-WSI, we align the distribution over topics from WSI to the provided sense inventory. We first create a distribution over words for each sense, from the sense’s gloss.4 Then a prevalence score is calculated for each sense by taking a weighted sum of the similarity of that sense with every topic,5 weighting each similarity score by the topic’s probability. These prevalence scores are finally normalised to give a distribution over senses. Despite state-of-the-art results with HDP-WSI in past work (Lau et al., 2014), there are some condone by Lau et al. (2012). 3The topic with the maximum probability is assigned. 4As with the lemma usages, the text is normalised via lemmatisation and stopword removal. Then a distribution is created using maximum likelihood estimation. 5Defined in terms of Jensen Shannon divergence between the respective distributions over words. 1515 cerns in applying it to large-scale learning. Most importantly, in order to make HDP nonparametric, it relies on relatively inefficient MCMC sampling techniques, typically based on a hierarchical Chinese Restaurant Process (“CRP”). On the other hand, recent work has provided very efficient topic modelling techniques given a fixed number of topics. While in previous work it was assumed that performance benefits of HDP over other techniques like LDA were based on it learning the “right” number of topics (Lau et al., 2012; Lau et al., 2014), more recent work challenges this assumption. 
Rather, it is suggested that it is more important for topic modelling to use high-performance learning algorithms so that topics are learnt in correct proportions, in which case “junk” topics can easily be ignored (Buntine and Mishra, 2014). In other words, it is likely that the previously-found performance advantage of HDP over LDA was actually due to properties of their respective Gibbs sampling algorithms. Furthermore, in our experience using it for sense distribution learning, HDP seems to use a very consistent number of topics. In experiments we ran on the BNC6 — the same dataset that Lau et al. (2014) based their experiments on — the number of topics was between 5 and 10 over 80% of the time, and over 99% of the time it was below 14. Because the number of topics is so consistent, it is likely we can safely use a fixed number with little risk that it will be too low. In addition, there are some theoretical concerns with HDP. Firstly, it models topic and word allocations using Dirichlet Processes (Teh et al., 2006). However, previous research has shown that phenomena such as word and sense frequencies follow power-law distributions according to Zipf’s law (Piantadosi, 2014), and thus are better modelled using Pitman-Yor Processes (Pitman and Yor, 1997). Another weakness is that HDP does not model burstiness. This is a phenomenon where words that occur at least once in a given discourse are disproportionately more likely to occur several times, even compared with other discourses about the same topic (Church, 2000; Doyle and Elkan, 2009). 6The British National Corpus (Burnard, 1995), which is a balanced corpus of English. 4 HCA-WSI Sense Learning We now present and evaluate HCA-WSI, which is an alternative to HDP-WSI that addresses the above concerns. It follows the same process as HDP-WSI, except that the HDP topic modelling is replaced with HCA7 (Buntine and Mishra, 2014), a more advanced software suite for topic modelling.8 HCA is based on a similar probabilistic model to HDP, except for a few differences: (1) it only has a fixed number of topics; (2) it models word frequencies using a more general PitmanYor Process; and (3) it incorporates an extra component to the model to model burstiness (each document can individually have an elevated probability for some words, regardless of its distribution over topics). The second and third of these differences directly answer our theoretical concerns about using HDP. The learning algorithm for HCA is called “table indicator sampling” (Chen et al., 2011), which is a collapsed Gibbs sampling algorithm. The overall probabilistic model is interpreted as a hierarchical CRP, and some extra latent variables called table indicators are added to the model, which encode the decisions made about creating new tables during the CRP. The use of these latent variables allows for a very efficient collapsed Gibbs sampling process, which is found to converge more quickly than competing Gibbs sampling and variational Bayes techniques. The convergence is also shown to be more accurate, with topic models of lower perplexity being produced given the same underlying stochastic model. Compared to HDP, HCA has been shown to be orders of magnitude faster, with similar memory overhead (Buntine and Mishra, 2014). Therefore, as long as the quality of the sense distributions given by HCA-WSI are no worse than those from HDP-WSI, it should be worthwhile switching in terms of scalability. 
This massive reduction in computation time would be of particular benefit to our intended large-scale application. 4.1 Evaluation We evaluate HCA-WSI in comparison to HDPWSI using one of the sense tagged datasets of 7Version 0.61, obtained from: http://www.mloss.org/software/view/527 8For simplicity we use HCA to refer to both the topic modelling algorithm implemented by Buntine and Mishra (2014) as well as the corresponding software suite, whereas elsewhere HCA often only refers to the software. 1516 0 20 40 102 103 104 Number of Lemma Usages (1000’s) Topic Model Training Time (s) Training Time for HDP-WSI vs. HCA-WSI HDP-WSI HCA-WSI Figure 1: Comparison of the time taken to train the topic models of HDP-WSI and HCA-WSI for each lemma in the BNC dataset. For each method, one data point is plotted per lemma. Koeling et al. (2005),9 which was also used by Lau et al. (2014). This dataset consists of 40 English lemmas, and for each lemma it contains a set of usages of varying size from the BNC and a gold-standard sense distribution that was created by hand-annotating a subset of the usages with WORDNET 1.7 senses. Using this dataset, we can calculate the quality of a candidate sense distribution by calculating its Jensen Shannon divergence (JSD) with respect to the corresponding gold-standard distribution. JSD is a measure of dissimilarity between two probability distributions, so a lower JSD score means the distribution is more similar to the goldstandard, and is therefore assumed to be of higher quality. Given our finding on topic counts in Section 3, HCA was run using a fixed number of 10 topics. Other settings were configured as recommended in the HCA documentation, or according to the HDP settings used by Lau et al. (2014).10 This setup is also used in subsequent experiments, except where stated otherwise. We proceeded by calculating the JSD scores of all lemmas in this dataset, using both methods. We performed a Wilcoxon signed-rank test on the two 9Koeling et al. (2005) also produced domain-specific datasets for the same lemmas, however in order to keep our analysis focussed we only use the domain-neutral BNC dataset. 10Initial values for concentration and discount parameters for burstiness were set to 100 and 0.5 respectively, and the number of iterations was set to 300. Other hyperparameters were left with default values. 0 500 1,000 10 12 14 Iteration Number log2(perplexity) HCA Topic Model Convergence Figure 2: Convergence of log-perplexity of toic model for BNC dataset lemmas, using HCA-WSI. One line per lemma. sequences of JSD scores, in order to test the hypothesis that switching to HCA-WSI has a systematic impact on sense distribution quality. We found that the mean JSD score for HDP-WSI was 0.209 ± 0.116, slightly lower than the mean JSD score for HCA-WSI of 0.211 ± 0.117. However the two-sided p-value from the test was 0.221, which is insignificant at any reasonable decision threshold. In addition, we compared the time taken11 to run topic modelling for every lemma using both methods, the results of which are displayed in Figure 1. These results show that the computation time of HCA-WSI is consistently lower than that of HDP-WSI, by over an order of magnitude. We conclude that HCA-WSI is far more computationally efficient than HDP-WSI, and there is no significant evidence that it gives worse sense distributions. Therefore, HCA-WSI is used instead of HDP-WSI for the remainder of the paper. 
5 Large-Scale Learning with HCA-WSI In order to apply HCA-WSI sense distribution learning on a language-wide scale, we need to understand how to optimise it to achieve a reasonable tradeoff between efficiency and sense distribution quality. Most pertinently, we need to know how many lemma usages and iterations of Gibbs sampling are needed for high-quality results, and whether this varies for different kinds of lemmas. To this end, we run experiments ex11All benchmarking experiments were run using separate cores on Intel Xeon CPU E5-4650L processors, on a Dell R820 server with 503GiB of main memory. 1517 0 20 40 −0.02 0 0.02 0.04 Number of Usages (1000’s) JSD-Mean Difference JSD-Mean Convergence Figure 3: Convergence of mean JSD score for BNC dataset lemmas, using HCA-WSI. One line plotted per lemma, one data-point per bin. For each datapoint, the difference between mean JSD within that bin and within the final bin of the lemma is plotted. ploring how HCA-WSI converges over increasing numbers of lemma usages and topic model iterations. These experiments are all performed using the BNC dataset (see Section 4.1). In order to explore the convergence of HCAWSI over Gibbs sampling iterations, we trained HCA topic models for each lemma in the BNC dataset over a large number of iterations. The results of this are displayed in Figure 2, which shows the convergence of log-perplexity for each lemma. We conclude that around 300 iterations of sampling appears to be sufficient for convergence in the vast majority of cases. Next, we explored the convergence of HCAWSI over lemma usages by subsampling from our training data. For each lemma in the BNC dataset, we created a large number of sense distributions using random subsets of the lemma’s usages.12 Each distribution was generated by randomly selecting a number of usages between a minimum of 500 and the maximum available (uniformly), and randomly sampling that many usages without replacement. From these usages the sense distribution was created using HCA-WSI, and its JSD score relative to the gold-standard was calculated (as in Section 4.1). Finally, the results for each lemma were partitioned into 40 bins of approximately equal size, according to the number of usages sampled. 12Approximately 580 random sense distributions were created per lemma. The results of our subsampling experiment are plotted in Figure 3, which shows the convergence of mean JSD score for each lemma. We conclude from this that around 5,000–10,000 usages seem to be necessary for convergent results, and that this is fairly consistent across lemmas.13 6 LEXSEMTM Dataset We now discuss the creation of the LEXSEMTM (“Lexical Semantic Topic Models”) dataset, which contains trained topic models for the majority of simplex English lemmas. These can be aligned to any sense repository with glosses to produce sense distributions, or used directly in other applications. In addition, the dataset contains distributions over WORDNET 3.0 senses. In order to produce domain-neutral sense distributions reflecting usage in modern English, we sampled all lemma usages from English Wikipedia.14 Our Wikipedia corpus was tokenised and POS-tagged using OpenNLP and lemmatised using Morpha (Minnen et al., 2001). We trained topic models for every simplex lemma in WORDNET 3.0 with at least 20 usages in our processed Wikipedia corpus. This included lemmas for all POS (nouns, verbs, adjectives, and adverbs), and also nonpolysemous lemmas. 
In Section 5, we concluded that approximately 5,000–10,000 usages were needed for convergent results with the BNC dataset. On the other hand, given that we are working on a different corpus and with a wider range of lemmas there is uncertainty in this number, so we conservatively sampled up to 40,000 usages per lemma, if available. These usages were sampled from the corpus by locating all sentences where either the surface or lemmatised forms of the sentence contained the target lemma, along with a matching POStag. Processing of lemma usages was done almost identically to Lau et al. (2014). However, because we found the usages contained substantially fewer tokens on average compared to the BNC dataset, we included two sentences rather than one on either side of the target lemma location where possible (giving 5 sentences in total), which gave a 13We also ran extensive experiments to test the impact of training single topic models over multiple lemmas, using a wide variety of sampling methods, but found the impact to be neutral at best in terms of both the quality of the learned sense distributions and the overall computational cost. 14The English Wikipedia dump is dated 2009-11-28. 1518 better match in usage size. Topic models were trained using HCA, using almost the same setup as described in Section 4.1. However, since some highly-polysemous lemmas may require a greater number of topics than the lemmas in the BNC dataset, we conservatively increased the number of topics used from 10 to 20. We similarly increased the number of Gibbs sampling iterations from 300 to 1,000.15 Finally, for each polysemous lemma that we trained a topic model for, we also produced a sense distribution over WORDNET 3.0 senses, using the default topic–sense alignment method discussed in Section 3. In total, 62,721 lemmas were processed, and 8,801 of these had the desired number of at least 5,000 usages. Counting only polysemous lemmas for which we also provide sense distributions, 25,155 were processed in total, and 6,853 of these had at least 5,000 usages. This works out to approximately 88% coverage of polysemous WORDNET 3.0 lemmas in total, or 24% coverage with at least 5,000 usages (as compared to 39% coverage by lemmas in SEMCOR, or 17% with at least 5 sense-tagged occurrences in SEMCOR). 7 Evaluation of LEXSEMTM against SEMCOR Our final major contribution is an analysis of how our LEXSEMTM sense distributions compare with SEMCOR. We produce a new set of goldstandard sense distributions for a diverse set of simplex English lemmas tagged with WORDNET 3.0 senses, created using crowdsourced annotations of English Wikipedia usages. We use these gold-standard distributions to investigate when LEXSEMTM should be used in place of SEMCOR, and release them as a public resource, to facilitate the evaluation of future work involving LEXSEMTM. 7.1 Gold-Standard Distributions One of our goals in creating this dataset was to determine whether there is a SEMCOR frequency cutoff,16 below which our LEXSEMTM distributions are clearly more accurate than SEMCOR. In order to have a diverse set of lemmas and be able 15These changes had a very minor impact on the HCAWSI evaluation results obtained in Section 4.1, with an average increase in JSD of 0.001 ± 0.004. 16The number of sense annotations in SEMCOR. to address this question, we partitioned the lemmas in WORDNET 3.0 based on SEMCOR frequency. 
In order to keep analysis simple and consistent with previous investigations, we first filtered out multiword lemmas, nonpolysemous lemmas, and non-nouns.17 Next, since in Section 5 we decided that at least around 5,000 usages were needed for stable and converged sense distributions, we filtered out all lemmas without at least 5,000 usages in our English Wikipedia corpus. The remaining lemmas were then split into 5 groups of approximately equal size based on SEMCOR frequency. The SEMCOR frequencies contained in each group are summarised in Table 1. From each of the SEMCOR frequency groups, we randomly sampled 10 lemmas, giving 50 lemmas in total. Then for each lemma, we randomly sampled 100 usages to be annotated from English Wikipedia. This was done in the same way as the sampling of lemma usages for LEXSEMTM (see Section 6). We obtained crowdsourced sense annotations for each lemma using Amazon Mechanical Turk (AMT: Callison-Burch and Dredze (2010)). The sentences for each lemma were split into 4 batches (25 sentences per batch). In addition, two control sentences18 were created for each lemma, and added to each corresponding batch. Each batch of 27 items was annotated separately by 10 annotators. For each item to be annotated, annotators were provided with the sentence containing the lemma, the gloss for each sense as listed in WORDNET 3.019 and a list of hypernyms and synonyms for each sense. Annotators were asked to assign each item to exactly one sense. From these crowdsourced annotations, our gold-standard sense distributions were created using MACE (Hovy et al., 2013), which is a generalpurpose tool for inferring item labels from multiannotator, multi-item tasks. It provides a Bayesian framework for modelling item annotations, modelling the individual biases of each annotator, and 17We chose to restrict our scope in this evaluation to nouns because much of the prior work has also focussed on nouns, and these are the words we would expect others to care the most about disambiguating, since they are more often context bearing. Also, introducing other POS would require a greater quantity of expensive annotated data. 18These were created manually, to be as clear and unambiguous as possible. 19Example sentences were removed only if they were for a different lemma within the corresponding synset. 1519 supports semi-supervised training. MACE was run separately on the usage annotations of each lemma, with the control sentences included to guide training. Gold-standard sense distributions were obtained from the output of MACE, which includes a list containing the mode label of each item. For each lemma, we removed the control sentence labels from this list, and constructed the goldstandard distribution from the remaining labels using maximum likelihood estimation. 7.2 Evaluation of LEXSEMTM We now use these gold-standard distributions to evaluate the sense distributions in LEXSEMTM relative to SEMCOR. For each of the 50 lemmas that we created gold-standard distributions for, we evaluate the corresponding LEXSEMTM distribution against the gold-standard. In addition, we create benchmark sense distributions for each lemma from SEMCOR counts using maximum likelihood estimation,20 which we also evaluate against the gold-standards. Evaluation of sense distribution quality using gold-standard distributions is done by calculating JSD, as in Section 4.1. First, we performed this comparison of LEXSEMTM to SEMCOR JSD scores for all 50 lemmas at once. 
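Two pieces of machinery recur from here on: turning a list of sense labels into a distribution by maximum likelihood (as just described for the MACE output, after removing the control-sentence labels), and scoring a learned distribution against the gold standard with JSD. The following is a minimal sketch, assuming the senses of a lemma are indexed consistently across distributions and taking log base 2 for JSD; the base and any smoothing are not stated in the paper and are assumptions here.

```python
import numpy as np
from collections import Counter

def mle_distribution(labels, senses):
    """Maximum likelihood sense distribution from a list of sense labels
    (e.g. the per-item mode labels output by MACE, control items removed)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return np.array([counts[s] / total for s in senses])

def jsd(p, q, base=2.0):
    """Jensen-Shannon divergence between two sense distributions given as
    probability vectors aligned to the same sense inventory."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0  # 0 * log(0/x) is taken to be 0
        return np.sum(a[nz] * np.log(a[nz] / b[nz])) / np.log(base)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```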
As in Section 4.1, we calculated the JSD scores for every lemma using each method individually, and compared the difference in values pairwise for statistical significance using a Wilcoxon signed-rank test. The results of this comparison are detailed in Table 1 (final row: Group = All), which shows that JSD is clearly lower for LEXSEMTM distributions compared to SEMCOR, as would be hoped. This difference is statistically significant at p < 0.05. We then performed the same comparison separately within each SEMCOR frequency group (Table 1). First of all, we can see that LEXSEMTM sense distributions strongly outperform SEMCORbased distributions in Group 1 (lemmas missing from SEMCOR). This is as would be expected, since the SEMCOR-based distributions for this group are based on which sense is listed first in WORDNET, which in the absence of SEMCOR counts is arbitrary. On the other hand, in all other groups (lemmas in SEMCOR) the difference between LEXSEMTM and SEMCOR is not statisti20For lemmas with no SEMCOR annotations, we assign one count to the first-listed sense in WORDNET 3.0. cally significant (p > 0.1 in all cases). This still remains true when we pool together the results from these groups (second last row of Table 1: Group = 2–5). While it appears that LEXSEMTM may still be outperforming SEMCOR on average over these groups (lower JSD on average), we do not have enough statistical power to be sure, given the high variance. Returning to the initial question regarding a SEMCOR frequency cutoff, the only strong conclusion we can make is that LEXSEMTM is clearly superior for lemmas missing from SEMCOR. Although it appears that LEXSEMTM may outperform SEMCOR for lemmas with higher SEMCOR frequencies, the variance in our results is too high to be sure of this, let alone define a frequency cutoff. However, given that LEXSEMTM sense distributions never appear to be worse than SEMCOR-based distributions, regardless of SEMCOR frequency — and may actually be marginally superior — it seems reasonable to use our sense distributions in general in place of SEMCOR. We can contrast this result to the findings of McCarthy et al. (2007), who found that the automatic first sense learning method of McCarthy et al. (2004b) outperformed SEMCOR for words with SEMCOR frequency less than 5. However, their analysis was based on the accuracy of the first sense heuristic, rather than the entire sense distribution, and they used very different datasets to us.21 Furthermore, their SEMCOR frequency cutoff result was only statistically significant for some variations of their method, and they evaluated over more lemmas22 meaning that statistical significance was easier to obtain. Given these reasons, their results likely do not contradict ours. Given that LEXSEMTM contains sense frequencies for 88% of polysemous simplex lemmas in WORDNET, compared to only 39% for SEMCOR, the strong performance of our LEXSEMTM sense distributions for lemmas missing from SEMCOR is extremely significant. Technically these results are only relevant for lemmas where LEXSEMTM was trained on at least 5,000 us21Their evaluation on the all words task from SENSEVAL2, which will have more occurrences of the more frequent words, whereas ours is a lexical sample with 100 instances of each word. However, our experiment has a larger dataset (50×100 = 5000 instances, as opposed to 786 in total in the SENSEVAL-2 dataset) which makes it more reliable. 
22They evaluated over 63 lemmas with SEMCOR frequency between 1 and 5, whereas we only evaluated over 14 lemmas (Group 2, and part of Group 3). 1520 Group Lemma Count SEMCOR Freqs. Mean JSD p LEXSEMTM SEMCOR 1 10 0 .100±.080 .615±.407 .013 2 10 1–3 .203±.169 .214±.250 .959 3 10 4–8 .100±.049 .103±.133 .878 4 10 9–20 .148±.069 .235±.166 .114 5 10 21+ .162±.121 .156±.131 .721 2–5 40 1+ .153±.118 .177±.184 .591 All 50 0+ .142±.113 .265±.301 .046 Table 1: Sense distribution quality for gold-standard dataset lemmas, comparing LEXSEMTM results to the SEMCOR benchmark. ages, which reduces the coverage of LEXSEMTM to 24%. However, even then this gives us sense frequencies for 1,602 polysemous lemmas missing from SEMCOR, which accounts for over 5% of polysemous simplex lemmas in WORDNET. Furthermore, based on some additional ongoing analysis comparing LEXSEMTM distributions directly to SEMCOR-based distributions across all of LEXSEMTM (not presented here), it appears the decrease in sense distribution quality for lemmas trained on fewer than 5,000 usages is on average fairly small. This is corroborated by our results in Figure 3: we can observe for the lemmas in the BNC dataset that when the number of usages was reduced to 500, the mean change in JSD for each lemma was almost always less than 0.02 and never greater than 0.04, which is small compared to the difference between LEXSEMTM and SEMCOR in each SEMCOR frequency group. This strongly suggests that our conclusions can be extended to lemmas with low LEXSEMTM frequency, though more work is needed to confirm this. 8 Discussion and Future Work The most immediate extension of our work would be to apply our sense learning method to a broader range of data. In particular, we intend to expand LEXSEMTM by applying HCA-WSI across the vocabularies of languages other than English, and also to multiword lemmas. Another obvious extension would be to further explore the alignment component of HCA-WSI. We currently use a simple approach, and we believe this process could be improved, e.g. by using word embeddings. In addition, previous work by Lau et al. (2012) and Lau et al. (2014) also provided methods for detecting novel and unattested senses, using the topic modelling output from the WSI step of HDP-WSI. These could be applied with LEXSEMTM— which contains this WSI output as well as sense frequencies — to search for novel and unattested senses throughout the entire vocabulary of English. This could be used to expand existing sense inventories with new senses, for example using the methodology of Cook et al. (2013). Given that LEXSEMTM also contains WSI output for nonpolysemous WORDNET lemmas (37,566 in total), this could be lead to the discovery of many new polysemous lemmas. In conclusion, we have created extensive resources for future work in NLP and related disciplines. We have produced LEXSEMTM, which was trained on English Wikipedia and spans approximately 88% of polysemous English lemmas. This dataset contains sense distributions for the majority of polysemous lemmas in WORDNET 3.0. It also contains lemma topic models, for both polysemous and nonpolysemous lemmas, which provide rich semantic information about lemma usage, and can be re-aligned to sense inventories to produce new sense distributions at trivial cost. 
In addition, we have produced goldstandard distributions for a subset of the lemmas in LEXSEMTM, which we have used to demonstrate that LEXSEMTM sense distributions are at least on-par with those based on SEMCOR for lemmas with a reasonable frequency in Wikipedia, and strongly superior for lemmas missing from SEMCOR. Finally, we demonstrated that HCA topic modelling is more efficient than HDP, providing guidance for others who wish to do large-scale unsupervised sense distribution learning. Acknowledgements This work was supported in part by a Google Cloud Platform award. 1521 References Eneko Agirre and Philip Edmonds. 2007. Word Sense Disambiguation: Algorithms and Applications. Springer, Dordrecht, Netherlands. Eneko Agirre and David Martinez. 2004. Unsupervised WSD based on automatically retrieved examples: The importance of bias. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 25–32, Barcelona, Spain. Sudha Bhingardive, Dhirendra Singh, V. Rudramurthy, Hanumant Redkar, and Pushpak Bhattacharyya. 2015. Unsupervised most frequent sense detection using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1238–1243, Denver, USA. Or Biran, Samuel Brody, and Noemie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 496–501, Portland, USA. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022. Jordan Boyd-Graber and David Blei. 2007. PUTOP: Turning predominant senses into a topic model for word sense disambiguation. In Proceedings of the 4th International Workshop on Semantic Evaluation (SemEval-2007), pages 277–281, Prague, Czech Republic. Jordan Boyd-Graber, David Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1024–1033, Prague, Czech Republic. Samuel Brody, Roberto Navigli, and Mirella Lapata. 2006. Ensemble methods for unsupervised WSD. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 97–104, Sydney, Australia. Wray L Buntine and Swapnil Mishra. 2014. Experiments with non-parametric topic models. In Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2014), pages 881–890, New York City, USA. Lou Burnard. 1995. User reference guide British National Corpus version 1.0. Technical report, Oxford University Computing Services, UK. Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with Amazon’s Mechanical Turk. In Proceedings of the North American Chapter of the Association for Computational Linguistics Human Language Technologies 2009 (NAACL 2009): Workshop on Creating Speech and Text Language Data With Amazon’s Mechanical Turk, pages 1–12, Los Angeles, USA. Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its supervised extension. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pages 2210– 2216, Austin, USA. Yee Seng Chan and Hwee Tou Ng. 2005. 
Word sense disambiguation with distribution estimation. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1010–1015, Edinburgh, UK. Yee Seng Chan and Hwee Tou Ng. 2006. Estimating class priors in domain adaptation for word sense disambiguation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 89–96, Sydney, Australia. Baobao Chang, Wenzhe Pei, and Miaohong Chen. 2014. Inducing word sense with automatically learned hidden concepts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 355–364, Dublin, Ireland. Changyou Chen, Lan Du, and Wray Buntine. 2011. Sampling table configurations for the hierarchical Poisson-Dirichlet process. In Machine Learning and Knowledge Discovery in Databases, volume 6912, pages 296–311. Springer. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025–1035, Doha, Qatar. Kenneth W. Church. 2000. Empirical estimates of adaptation: The chance of two noriegas is closer to p/2 than p2. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 180–186, Saarbr¨ucken, Germany. Paul Cook, Jey Han Lau, Michael Rundell, Diana McCarthy, and Timothy Baldwin. 2013. A lexicographic appraisal of an automatic approach for detecting new word senses. In Proceedings of eLex 2013, pages 49–65, Tallinn, Estonia. Gabriel Doyle and Charles Elkan. 2009. Accounting for burstiness in topic models. In Proceedings of the 26th Annual International Conference on Machine 1522 Learning (ICML 2009), pages 281–288, Montreal, Canada. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, USA. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130, Atlanta, USA. Peng Jin, Diana McCarthy, Rob Koeling, and John Carroll. 2009. Estimating and exploiting the entropy of sense distributions. In Proceedings of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT 2009): Short Papers, pages 233–236, Boulder, USA. Rob Koeling, Diana McCarthy, and John Carroll. 2005. Domain-specific sense distributions and predominant sense acquisition. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP 2005), pages 419– 426, Vancouver, Canada. Mirella Lapata and Chris Brew. 2004. Verb class disambiguation using informative priors. Computational Linguistics, 30(1):45–73. Mirella Lapata and Frank Keller. 2007. An information retrieval approach to sense ranking. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 348–355, Rochester, USA. Hugo Larochelle and Iain Murray. 2011. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011), pages 29–37, Fort Lauderdale, USA. 
Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the EACL (EACL 2012), pages 591–601, Avignon, France. Jey Han Lau, Paul Cook, Diana McCarthy, Spandana Gella, and Timothy Baldwin. 2014. Learning word sense distributions, detecting unattested senses and identifying novel senses using topic models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 259–270, Baltimore, USA. Diana McCarthy, Rob Koeling, and Julie Weeds. 2004a. Ranking WordNet senses automatically. Technical Report 569, Department of Informatics, University of Sussex. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004b. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 280–287, Barcelona, Spain. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics, 33(4):553–590. George A Miller, Claudia Leacock, and Randee Tengi. 1993. A semantic concordance. In Proceedings of the ARPA Workshop on Human Language Technology, pages 303–308, Plainsboro, USA. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. Saif Mohammad and Graeme Hirst. 2006. Determining word sense dominance using a thesaurus. In Proceedings of the 11th Conference of the EACL (EACL 2006), pages 121–128, Trento, Italy. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41:1–69. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059–1069, Doha, Qatar. Steven T Piantadosi. 2014. Zipf’s word frequency law in natural language: A critical review and future directions. Psychonomic Bulletin & Review, 21(5):1112–1130. Jim Pitman and Marc Yor. 1997. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, pages 855– 900. Marten Postma, Ruben Izquierdo, and Piek Vossen. 2015. VUA-background : When to use background information to perform word sense disambiguation. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval-2015), pages 345– 349, Denver, USA, June. Eyal Shnarch, Jacob Goldberger, and Ido Dagan. 2011. A probabilistic modeling framework for lexical entailment. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 558– 563, Portland, USA. Niket Tandon, Gerard de Melo, Abir De, and Gerhard Weikum. 2015. Lights, camera, action: Knowledge extraction from movie scripts. In Proceedings of the 24th International Conference on World Wide Web, pages 127–128, Florence, Italy. 1523 Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. Pengtao Xie, Diyi Yang, and Eric Xing. 2015. Incorporating word correlation knowledge into topic modeling. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 725–734, Denver, USA.
2016
143
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1525–1534, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics The LAMBADA dataset: Word prediction requiring a broad discourse context∗ Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham†, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fern´andez‡ CIMeC - Center for Mind/Brain Sciences, University of Trento {firstname.lastname}@unitn.it, †[email protected] ‡Institute for Logic, Language & Computation, University of Amsterdam [email protected] Abstract We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-ofthe-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text. 1 Introduction The recent spurt of powerful end-to-end-trained neural networks for Natural Language Processing (Hermann et al., 2015; Rockt¨aschel et al., 2016; Weston et al., 2015, a.o.) has sparked interest in tasks to measure the progress they are bringing about in genuine language understanding. Special care must be taken in evaluating such systems, since their effectiveness at picking statistical generalizations from large corpora can lead to the illusion that they are reaching a deeper degree of understanding than they really are. For example, ∗Denis and Germ´an share first authorship. Marco, Gemma, and Raquel share senior authorship. the end-to-end system of Vinyals and Le (2015), trained on large conversational datasets, produces dialogues such as the following: (1) Human: what is your job? Machine: i’m a lawyer Human: what do you do? Machine: i’m a doctor Separately, the system responses are appropriate for the respective questions. However, when taken together, they are incoherent. The system behaviour is somewhat parrot-like. It can locally produce perfectly sensible language fragments, but it fails to take the meaning of the broader discourse context into account. Much research effort has consequently focused on designing systems able to keep information from the broader context into memory, and possibly even perform simple forms of reasoning about it (Hermann et al., 2015; Hochreiter and Schmidhuber, 1997; Ji et al., 2015; Mikolov et al., 2015; Sordoni et al., 2015; Sukhbaatar et al., 2015; Wang and Cho, 2015, a.o.). In this paper, we introduce the LAMBADA dataset (LAnguage Modeling Broadened to Account for Discourse Aspects). LAMBADA proposes a word prediction task where the target item is difficult to guess (for English speakers) when only the sentence in which it appears is available, but becomes easy when a broader context is presented. Consider Example (1) in Figure 1. The sentence Do you honestly think that I would want you to have a ? 
has a multitude of possible continuations, but the broad context clearly indicates that the missing word is miscarriage. LAMBADA casts language understanding in the classic word prediction framework of language modeling. We can thus use it to test several existing language modeling architectures, including 1525 systems with capacity to hold longer-term contextual memories. In our preliminary experiments, none of these models came even remotely close to human performance, confirming that LAMBADA is a challenging benchmark for research on automated models of natural language understanding. 2 Related datasets The CNN/Daily Mail (CNNDM) benchmark recently introduced by Hermann et al. (2015) is closely related to LAMBADA. CNNDM includes a large set of online articles that are published together with short summaries of their main points. The task is to guess a named entity that has been removed from one such summary. Although the data are not normed by subjects, it is unlikely that the missing named entity can be guessed from the short summary alone, and thus, like in LAMBADA, models need to look at the broader context (the article). Differences between the two datasets include text genres (news vs. novels; see Section 3.1) and the fact that missing items in CNNDM are limited to named entities. Most importantly, the two datasets require models to perform different kinds of inferences over broader passages. For CNNDM, models must be able to summarize the articles, in order to make sense of the sentence containing the missing word, whereas in LAMBADA the last sentence is not a summary of the broader passage, but a continuation of the same story. Thus, in order to succeed, models must instead understand what is a plausible development of a narrative fragment or a dialogue. Another related benchmark, CBT, has been introduced by Hill et al. (2016). Like LAMBADA, CBT is a collection of book excerpts, with one word randomly removed from the last sentence in a sequence of 21 sentences. While there are other design differences, the crucial distinction between CBT and LAMBADA is that the CBT passages were not filtered to be human-guessable in the broader context only. Indeed, according to the post-hoc analysis of a sample of CBT passages reported by Hill and colleagues, in a large proportion of cases in which annotators could guess the missing word from the broader context, they could also guess it from the last sentence alone. At the same time, in about one fifth of the cases, the annotators could not guess the word even when the broader context was given. Thus, only a small portion of the CBT passages are really probing the model’s ability to understand the broader context, which is instead the focus of LAMBADA. The idea of a book excerpt completion task was originally introduced in the MSRCC dataset (Zweig and Burges, 2011). However, the latter limited context to single sentences, not attempting to measure broader passage understanding. Of course, text understanding can be tested through other tasks, including entailment detection (Bowman et al., 2015), answering questions about a text (Richardson et al., 2013; Weston et al., 2015) and measuring inter-clause coherence (Yin and Sch¨utze, 2015). While different tasks can provide complementary insights into the models’ abilities, we find word prediction particularly attractive because of its naturalness (it’s easy to norm the data with non-expert humans) and simplicity. 
Models just need to be trained to predict the most likely word given the previous context, following the classic language modeling paradigm, which is a much simpler setup than the one required, say, to determine whether two sentences entail each other. Moreover, models can have access to virtually unlimited amounts of training data, as all that is required to train a language model is raw text. On a more general methodological level, word prediction has the potential to probe almost any aspect of text understanding, including but not limited to traditional narrower tasks such as entailment, co-reference resolution or word sense disambiguation. 3 The LAMBADA dataset 3.1 Data collection1 LAMBADA consists of passages composed of a context (on average 4.6 sentences) and a target sentence. The context size is the minimum number of complete sentences before the target sentence such that they cumulatively contain at least 50 tokens (this size was chosen in a pilot study). The task is to guess the last word of the target sentence (the target word). The constraint that the target word be the last word of the sentence, while not necessary for our research goal, makes the task more natural for human subjects. The LAMBADA data come from the Book Corpus (Zhu et al., 2015). The fact that it contains unpublished novels minimizes the potential 1Further technical details are provided in the Supplementary Material (SM): http://clic.cimec.unitn.it/ lambada/ 1526 (1) Context: “Yes, I thought I was going to lose the baby.” “I was scared too,” he stated, sincerity flooding his eyes. “You were ?” “Yes, of course. Why do you even ask?” “This baby wasn’t exactly planned for.” Target sentence: “Do you honestly think that I would want you to have a ?” Target word: miscarriage (2) Context: “Why?” “I would have thought you’d find him rather dry,” she said. “I don’t know about that,” said Gabriel. “He was a great craftsman,” said Heather. “That he was,” said Flannery. Target sentence: “And Polish, to boot,” said . Target word: Gabriel (3) Context: Preston had been the last person to wear those chains, and I knew what I’d see and feel if they were slipped onto my skin-the Reaper’s unending hatred of me. I’d felt enough of that emotion already in the amphitheater. I didn’t want to feel anymore. “Don’t put those on me,” I whispered. “Please.” Target sentence: Sergei looked at me, surprised by my low, raspy please, but he put down the . Target word: chains (4) Context: They tuned, discussed for a moment, then struck up a lively jig. Everyone joined in, turning the courtyard into an even more chaotic scene, people now dancing in circles, swinging and spinning in circles, everyone making up their own dance steps. I felt my feet tapping, my body wanting to move. Target sentence: Aside from writing, I ’ve always loved . Target word: dancing (5) Context: He shook his head, took a step back and held his hands up as he tried to smile without losing a cigarette. “Yes you can,” Julia said in a reassuring voice. “I ’ve already focused on my friend. You just have to click the shutter, on top, here.” Target sentence: He nodded sheepishly, through his cigarette away and took the . Target word: camera (6) Context: In my palm is a clear stone, and inside it is a small ivory statuette. A guardian angel. “Figured if you’re going to be out at night getting hit by cars, you might as well have some backup.” I look at him, feeling stunned. Like this is some sort of sign. 
Target sentence: But as I stare at Harlin, his mouth curved in a confident grin, I don’t care about . Target word: signs (7) Context: Both its sun-speckled shade and the cool grass beneath were a welcome respite after the stifling kitchen, and I was glad to relax against the tree’s rough, brittle bark and begin my breakfast of buttery, toasted bread and fresh fruit. Even the water was tasty, it was so clean and cold. Target sentence: It almost made up for the lack of . Target word: coffee (8) Context: My wife refused to allow me to come to Hong Kong when the plague was at its height and –” “Your wife, Johanne? You are married at last ?” Johanne grinned. “Well, when a man gets to my age, he starts to need a few home comforts. Target sentence: After my dear mother passed away ten years ago now, I became . Target word: lonely (9) Context: “Again, he left that up to you. However, he was adamant in his desire that it remain a private ceremony. He asked me to make sure, for instance, that no information be given to the newspaper regarding his death, not even an obituary. Target sentence: I got the sense that he didn’t want anyone, aside from the three of us, to know that he’d even . Target word: died (10) Context: The battery on Logan’s radio must have been on the way out. So he told himself. There was no other explanation beyond Cygan and the staff at the White House having been overrun. Lizzie opened her eyes with a flutter. They had been on the icy road for an hour without incident. Target sentence: Jack was happy to do all of the . Target word: driving Figure 1: Examples of LAMBADA passages. Underlined words highlight when the target word (or its lemma) occurs in the context. 1527 usefulness of general world knowledge and external resources for the task, in contrast to other kinds of texts like news data, Wikipedia text, or famous novels. The corpus, after duplicate removal and filtering out of potentially offensive material with a stop word list, contains 5,325 novels and 465 million words. We randomly divided the novels into equally-sized training and development+testing partitions. We built the LAMBADA dataset from the latter, with the idea that models tackling LAMBADA should be trained on raw text from the training partition, composed of 2662 novels and encompassing more than 200M words. Because novels are pre-assigned to one of the two partitions only, LAMBADA passages are self-contained and cannot be solved by exploiting the knowledge in the remainder of the novels, for example background information about the characters involved or the properties of the fictional world in a given novel. The same novel-based division method is used to further split LAMBADA data between development and testing. To reduce time and cost of dataset collection, we filtered out passages that are relatively easy for standard language models, since such cases are likely to be guessable based on local context alone. We used a combination of four language models, chosen by availability and/or ease of training: a pre-trained recurrent neural network (RNN) (Mikolov et al., 2011) and three models trained on the Book Corpus (a standard 4-gram model, a RNN and a feed-forward model; see SM for details, and note that these are different from the models we evaluated on LAMBADA as described in Section 4 below). Any passage whose target word had probability ≥0.00175 according to any of the language models was excluded. 
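The automatic pre-filtering step just described can be summarised as follows; `lm.target_probability` is a hypothetical interface standing in for whichever of the four language models is being queried, not an API from any released code.

```python
LM_THRESHOLD = 0.00175  # probability cut-off used in the paper

def passes_lm_filter(context, target_word, language_models):
    """Keep a candidate passage only if *every* language model assigns the
    target word a probability below the threshold given the preceding text,
    i.e. the item is not easily solvable by standard language modelling."""
    return all(lm.target_probability(target_word, context) < LM_THRESHOLD
               for lm in language_models)
```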
A random sample of the remaining passages were then evaluated by human subjects through the CrowdFlower crowdsourcing service2 in three steps. For a given passage, 1. one human subject guessed the target word based on the whole passage (comprising the context and the target sentence); if the guess was right, 2. a second subject guessed the target word based on the whole passage; if that guess was also right, 3. more subjects tried to guess the target word based on the target sentence only, until the 2http://www.crowdflower.com word was guessed or the number of unsuccessful guesses reached 10; if no subject was able to guess the target word, the passage was added to the LAMBADA dataset. The subjects in step 3 were allowed 3 guesses per sentence, to maximize the chances of catching cases where the target words were guessable from the sentence alone. Step 2 was added based on a pilot study that revealed that, while step 3 was enough to ensure that the data could not be guessed with the local context only, step 1 alone did not ensure that the data were easy given the discourse context (its output includes a mix of cases ranging from obvious to relatively difficult, guessed by an especially able or lucky step-1 subject). We made sure that it was not possible for the same subject to judge the same item in both passage and sentence conditions (details in SM). In the crowdsourcing pipeline, 84–86% items were discarded at step 1, an additional 6–7% at step 2 and another 3–5% at step 3. Only about one in 25 input examples passed all the selection steps. Subjects were paid $0.22 per page in steps 1 and 2 (with 10 passages per page) and $0.15 per page in step 3 (with 20 sentences per page). Overall, each item in the resulting dataset costed $1.24 on average. Alternative designs, such as having step 3 before step 2 or before step 1, were found to be more expensive. Cost considerations also precluded us from using more subjects at stage 1, which could in principle improve the quality of filtering at this step. Note that the criteria for passage inclusion were very strict: We required two consecutive subjects to exactly match the missing word, and we made sure that no subject (out of ten) was able to provide it based on local context only, even when given 3 guesses. An alternative to this perfect-match approach would have been to include passages where broad-context subjects provided other plausible or synonymous continuations. However, it is very challenging, both practically and methodologically, to determine which answers other than the original fit the passage well, especially when the goal is to distinguish between items that are solvable in broad-discourse context and those where the local context is enough. Theoretically, substitutability in context could be tested with manual annotation by multiple additional raters, but this would not be financially or practically feasible for a dataset of this scale (human annotators received 1528 over 200,000 passages at stage 1). For this reason we went for the strict hit-or-miss approach, keeping only items that can be unambiguously determined by human subjects. 3.2 Dataset statistics The LAMBADA dataset consists of 10,022 passages, divided into 4,869 development and 5,153 test passages (extracted from 1,331 and 1,332 disjoint novels, respectively). The average passage consists of 4.6 sentences in the context plus 1 target sentence, for a total length of 75.4 tokens (dev) / 75 tokens (test). Examples of passages in the dataset are given in Figure 1. 
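Summarising the Section 3.1 selection protocol in code form, purely as a description of the control flow: the guess functions below stand in for the human judgments collected on CrowdFlower, and nothing here corresponds to released code.

```python
def passes_human_filter(passage, target_sentence, target_word,
                        guess_from_passage, guesses_from_sentence):
    """Return True if the item qualifies for LAMBADA: two subjects guess the
    word from the full passage, and no subject guesses it from the target
    sentence alone (10 subjects, 3 guesses each)."""
    # Steps 1 and 2: two consecutive subjects must guess exactly right
    # when shown the whole passage (context + target sentence).
    for _ in range(2):
        if guess_from_passage(passage) != target_word:
            return False
    # Step 3: up to 10 subjects see only the target sentence; the item is
    # kept only if none of them produces the target word within 3 guesses.
    for _ in range(10):
        if target_word in guesses_from_sentence(target_sentence, n_guesses=3):
            return False
    return True
```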
The training data for language models to be tested on LAMBADA include the full text of 2,662 novels (disjoint from those in dev+test), comprising 203 million words. Note that the training data consists of text from the same domain as the dev+test passages, in large amounts but not filtered in the same way. This is partially motivated by economic considerations (recall that each data point costs $1.24 on average), but, more importantly, it is justified by the intended use of LAMBADA as a tool to evaluate general-purpose models in terms of how they fare on broad-context understanding (just like our subjects could predict the missing words using their more general text understanding abilities), not as a resource to develop ad-hoc models only meant to predict the final word in the sort of passages encountered in LAMBADA. The development data can be used to fine-tune models to the specifics of the LAMBADA passages. 3.3 Dataset analysis Our analysis of the LAMBADA data suggests that, in order for the target word to be predictable in a broad context only, it must be strongly cued in the broader discourse. Indeed, it is typical for LAMBADA items that the target word (or its lemma) occurs in the context. Figure 2(a) compares the LAMBADA items to a random 5000-item sample from the input data, that is, the passages that were presented to human subjects in the filtering phase (we sampled from all passages passing the automated filters described in Section 3.1 above, including those that made it to LAMBADA). The figure shows that when subjects guessed the word (only) in the broad context, often the word itself occurred in the context: More than 80% of LAMBADA passages include the target word in the context, while in the input data that was the case for less than 15% of the passages. To guess the right word, however, subjects must still put their linguistic and general cognitive skills to good use, as shown by the examples featuring the target word in the context reported in Figure 1. Figure 2(b) shows that most target words in LAMBADA are proper nouns (48%), followed by common nouns (37%) and, at a distance, verbs (7.7%). In fact, proper nouns are hugely overrepresented in LAMBADA, while the other categories are under-represented, compared to the POS distribution in the input. A variety of factors converges in making proper nouns easy for subjects in the LAMBADA task. In particular, when the context clearly demands a referential expression, the constraint that the blank be filled by a single word excludes other possibilities such as noun phrases with articles, and there are reasons to suspect that co-reference is easier than other discourse phenomena in our task (see below). However, although co-reference seems to play a big role, only 0.3% of target words are pronouns. Common nouns are still pretty frequent in LAMBADA, constituting over one third of the data. Qualitative analysis reveals a mixture of phenomena. Co-reference is again quite common (see Example (3) in Figure 1), sometimes as “partial” co-reference facilitated by bridging mechanisms (shutter–camera; Example (5)) or through the presence of a near synonym (‘lose the baby’– miscarriage; Example (1)). However, we also often find other phenomena, such as the inference of prototypical participants in an event. For instance, if the passage describes someone having breakfast together with typical food and beverages (see Example (7)), subjects can guess the target word coffee without it having been explicitly mentioned. 
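The statistic behind Figure 2(a), namely whether the target word or its lemma already occurs in the context, is easy to recompute. In the sketch below, the passage representation and the `lemmatise` helper are assumptions about how the data is stored, not the analysis scripts used for the paper.

```python
def target_in_context(context_tokens, target_word, lemmatise):
    """True if the target word, or its lemma, occurs among the context tokens
    (the criterion behind Figure 2(a)); `lemmatise` is an assumed helper."""
    context = {t.lower() for t in context_tokens}
    context |= {lemmatise(t) for t in context}
    return target_word.lower() in context or lemmatise(target_word) in context

def share_with_target_in_context(passages, lemmatise):
    """Fraction of passages whose target word (or its lemma) is in the
    context; `passages` is assumed to be (context_tokens, target_word) pairs."""
    hits = sum(target_in_context(ctx, tgt, lemmatise) for ctx, tgt in passages)
    return hits / len(passages)
```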
In contrast, verbs, adjectives, and adverbs are rare in LAMBADA. Many of those items can be guessed with local sentence context only, as shown in Figure 2(b), which also reports the POS distribution of the set of items that were guessed by subjects based on the target-sentence context only (step 3 in Section 3.1). Note a higher proportion of verbs, adjectives and adverbs in the latter set in Figure 2(b). While end-of-sentence context skews input distribution in favour of nouns, subject filtering does show a clear differential effect for nouns vs. other POSs.

[Figure 2: (a) Target word in or not in context; (b) Target word POS distribution in LAMBADA vs. data presented to human subjects (input) and items guessed with sentence context only (PN=proper noun, CN=common noun, V=verb, J=adjective, R=adverb, O=other); (c) Target word POS distribution of LAMBADA passages where the lemma of the target word is not in the context (categories as in (b)).]

Manual inspection reveals that broad context is not necessary to guess items like frequent verbs (ask, answer, call), adjectives, and closed-class adverbs (now, too, well), as well as time-related adverbs (quickly, recently). In these cases, the sentence context suffices, so few of them end up in LAMBADA (although of course there are exceptions, such as Example (8), where the target word is an adjective). This contrasts with other types of open-class adverbs (e.g., innocently, confidently), which are generally hard to guess with both local and broad context. The low proportion of these kinds of adverbs and of verbs among guessed items in general suggests that tracking event-related phenomena (such as script-like sequences of events) is harder for subjects than coreferential phenomena, at least as framed in the LAMBADA task. Further research is needed to probe this hypothesis.

Furthermore, we observe that, while explicit mention in the preceding discourse context is critical for proper nouns, the other categories can often be guessed without having been explicitly introduced. This is shown in Figure 2(c), which depicts the POS distribution of LAMBADA items for which the lemma of the target word is not in the context (corresponding to about 16% of LAMBADA in total).3

[Footnote 3: The apparent 1% of out-of-context proper nouns shown in Figure 2(c) is due to lemmatization mistakes (fictional characters for which the lemmatizer did not recognize a link between singular and plural forms, e.g., Wynn – Wynns). A manual check confirmed that all proper noun target words in LAMBADA are indeed also present in the context.]

Qualitative analysis of items with verbs and adjectives as targets suggests that the target word, although not present in the passage, is still strongly implied by the context. In about one third of the cases examined, the missing word is “almost there”. For instance, the passage contains a word with the same root but a different part of speech (e.g., death–died in Example (6)), or a synonymous expression (as mentioned above for “miscarriage”; we find the same phenomenon for verbs, e.g., ‘deprived you of water’–dehydrated).
In other cases, correct prediction requires more complex discourse inference, including guessing prototypical participants of a scene (as in the coffee example above), actions or events strongly suggested by the discourse (see Examples (1) and (10), where the mention of an icy road helps in predicting the target driving), or qualitative properties of participants or situations (see Example (8)). Of course, the same kind of discourse reasoning takes place when the target word is already present in the context (cf. Examples (3) and (4)). The presence of the word in context does not make the reasoning unnecessary (the task remains challenging), but facilitates the inference. As a final observation, intriguingly, the LAMBADA items contain (quoted) direct speech significantly more often than the input items overall (71% of LAMBADA items vs. 61% of items in the input sample), see, e.g., Examples (1) and (2). Further analysis is needed to investigate in what way more dialogic discourse might facilitate the prediction of the final target word. In sum, LAMBADA contains a myriad of phenomena that, besides making it challenging from the text understanding perspective, are of great interest to the broad Computational Linguistics 1530 community. To return to Example (1), solving it requires a combination of linguistic skills ranging from (morpho)phonology (the plausible target word abortion is ruled out by the indefinite determiner a) through morphosyntax (the slot should be filled by a common singular noun) to pragmatics (understanding what the male participant is inferring from the female participant’s words), in addition to general reasoning skills. It is not surprising, thus, that LAMBADA is so challenging for current models, as we show next. 4 Modeling experiments Computational methods We tested several existing language models and baselines on LAMBADA. We implemented a simple RNN (Elman, 1990), a Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997), a traditional statistical N-Gram language model (Stolcke, 2002) with and without cache, and a Memory Network (Sukhbaatar et al., 2015). We remark that at least LSTM, Memory Network and, to a certain extent, the cache N-Gram model have, among their supposed benefits, the ability to take broader contexts into account. Note moreover that variants of RNNs and LSTMs are at the state of the art when tested on standard language modeling benchmarks (Mikolov, 2014). Our Memory Network implementation is similar to the one with which Hill et al. (2016) reached the best results on the CBT data set (see Section 2 above). While we could not re-implement the models that performed best on CNNDM (see again Section 2), our LSTM is architecturally similar to the Deep LSTM Reader of Hermann et al. (2015), which achieved respectable performance on that data set. Most importantly, we will show below that most of our models reach impressive performance when tested on a more standard language modeling data set sourced from the same corpus used to build LAMBADA. This control set was constructed by randomly sampling 5K passages of the same shape and size as the ones used to build LAMBADA from the same test novels, but without filtering them in any way. Based on the control set results, to be discussed below, we can reasonably claim that the models we are testing on LAMBADA are very good at standard language modeling, and their low performance on the latter cannot be attributed to poor quality. 
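For concreteness, here is a minimal word-level LSTM language model of the kind listed above, written in PyTorch. The architecture and dimensions are illustrative only and are not the tuned models evaluated in the paper; the point is simply how such a model is asked to fill the final blank of a passage.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Minimal word-level LSTM language model; dimensions are illustrative."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of the passage so far
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)  # (batch, seq_len, vocab) next-word logits

def guess_final_word(model, token_ids):
    """Predict the missing last word from the distribution computed after
    reading the whole passage up to the blank."""
    with torch.no_grad():
        logits = model(token_ids)[:, -1, :]
    return logits.argmax(dim=-1)
```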
In order to test for strong biases in the data, we constructed Sup-CBOW, a baseline model weakly tailored to the task at hand, consisting of a simple neural network that takes as input a bag-of-word representation of the passage and attempts to predict the final word. The input representation comes from adding pre-trained CBOW vectors (Mikolov et al., 2013) of the words in the passage.4 We also considered an unsupervised variant (Unsup-CBOW) where the target word is predicted by cosine similarity between the passage vector and the target word vector. Finally, we evaluated several variations of a random guessing baseline differing in terms of the word pool to sample from. The guessed word could be picked from: the full vocabulary, the words that appear in the current passage and a random uppercased word from the passage. The latter baseline aims at exploiting the potential bias that proper names account for a consistent portion of the LAMBADA data (see Figure 2 above).

[Footnote 4: http://clic.cimec.unitn.it/composes/semantic-vectors.html]

Note that LAMBADA was designed to challenge language models with harder-than-average examples where broad context understanding is crucial. However, the average case should not be disregarded either, since we want language models to be able to handle both cases. For this reason, we trained the models entirely on unsupervised data and expect future work to follow similar principles. Concretely, we trained the models, as is standard practice, on predicting each upcoming word given the previous context, using the LAMBADA training data (see Section 3.2 above) as input corpus. The only exception to this procedure was Sup-CBOW where we extracted from the training novels similar-shaped passages to those in LAMBADA and trained the model on them (about 9M passages). Again, the goal of this model was only to test for potential biases in the data and not to provide a full account for the phenomena we are testing. We restricted the vocabulary of the models to the 60K most frequent words in the training set (covering 95% of the target words in the development set). The model hyperparameters were tuned on their accuracy in the development set. The same trained models were tested on the LAMBADA and the control sets. See SM for the tuning details.

Results

Results of models and baselines are reported in Table 1. Note that the measure of interest for LAMBADA is the average success of a model at predicting the target word, i.e., accuracy (unlike in standard language modeling, we know that the missing LAMBADA words can be precisely predicted by humans, so good models should be able to accomplish the same feat, rather than just assigning a high probability to them). However, as we observe a bottoming effect with accuracy, we also report perplexity and median rank of correct word, to better compare the models.

Data      Type       Method                                  Acc.   Ppl.    Rank
LAMBADA   baselines  Random vocabulary word                  0      60000   30026
LAMBADA   baselines  Random word from passage                1.6    –       –
LAMBADA   baselines  Random capitalized word from passage    7.3    –       –
LAMBADA   baselines  Unsup-CBOW                              0      57040   16352
LAMBADA   baselines  Sup-CBOW                                0      47587   4660
LAMBADA   models     N-Gram                                  0.1    3125    993
LAMBADA   models     N-Gram w/cache                          0.1    768     87
LAMBADA   models     RNN                                     0      14725   7831
LAMBADA   models     LSTM                                    0      5357    324
LAMBADA   models     Memory Network                          0      16318   846
Control   baselines  Random vocabulary word                  0      60000   30453
Control   baselines  Random word from passage                0      –       –
Control   baselines  Random capitalized word from passage    0      –       –
Control   baselines  Unsup-CBOW                              0      55190   12950
Control   baselines  Sup-CBOW                                3.5    2344    259
Control   models     N-Gram                                  19.1   285     17
Control   models     N-Gram w/cache                          19.1   270     18
Control   models     RNN                                     15.4   277     24
Control   models     LSTM                                    21.9   149     12
Control   models     Memory Network                          8.5    566     46

Table 1: Results of computational methods. Accuracy is expressed in percentage.
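The three columns of Table 1 can be computed from the distribution a model assigns over the 60K-word vocabulary at the blank position. The sketch below assumes, as the numbers in the table suggest, that perplexity is measured on the target words only; this assumption and the tie-handling for ranks are mine, not details stated in the paper.

```python
import numpy as np

def score_predictions(log_probs, target_ids):
    """log_probs: (n_items, vocab_size) log-probabilities assigned to each
    vocabulary word at the blank; target_ids: index of the gold target word.
    Returns accuracy (%), target-word perplexity, and median rank."""
    log_probs = np.asarray(log_probs)
    target_ids = np.asarray(target_ids)
    preds = log_probs.argmax(axis=1)
    accuracy = 100.0 * np.mean(preds == target_ids)
    gold_lp = log_probs[np.arange(len(target_ids)), target_ids]
    perplexity = float(np.exp(-gold_lp.mean()))
    ranks = (log_probs > gold_lp[:, None]).sum(axis=1) + 1
    return accuracy, perplexity, int(np.median(ranks))
```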
As anticipated above, and in line with what we expected, all our models have very good performance when called to perform a standard language modeling task on the control set. Indeed, 3 of the models (the N-Gram models and LSTM) can guess the right word in about 1/5 of the cases. The situation drastically changes if we look at the LAMBADA results, where all models are performing very badly. Indeed, no model is even able to compete with the simple heuristics of picking a random word from the passage, and, especially, a random capitalized word (easily a proper noun). At the same time, the low performance of the latter heuristic in absolute terms (7% accuracy) shows that, despite the bias in favour of names in the passage, simply relying on this will not suffice to obtain good performance on LAMBADA, and models should rather pursue deeper forms of analysis of the broader context (the Sup-CBOW baseline, attempting to directly exploit the passage in a shallow way, performs very poorly). This confirms again that the difficulty of LAMBADA relies mainly on accounting for the information available in a broader context and not on the task of predicting the exact word missing. In comparative terms (and focusing on perplexity and rank, given the uniformly low accuracy results) we observe a stronger performance of the traditional N-Gram models over the neuralnetwork-based ones, possibly pointing to the difficulty of tuning the latter properly. In particular, the best relative performance on LAMBADA is achieved by N-Gram w/cache, which takes passage statistics into account. While even this model is effectively unable to guess the right word, it achieves a respectable perplexity of 768. 1532 We recognize, of course, that the evaluation we performed is very preliminary, and it must only be taken as a proof-of-concept study of the difficulty of LAMBADA. Better results might be obtained simply by performing more extensive tuning, by adding more sophisticated mechanisms such as attention (Bahdanau et al., 2014), and so forth. Still, we would be surprised if minor modifications of the models we tested led to human-level performance on the task. We also note that, because of the way we have constructed LAMBADA, standard language models are bound to fail on it by design: one of our first filters (see Section 3.1) was to choose passages where a number of simple language models were failing to predict the upcoming word. However, future research should find ways around this inherent difficulty. After all, humans were still able to solve this task, so a model that claims to have good language understanding ability should be able to succeed on it as well. 5 Conclusion This paper introduced the new LAMBADA dataset, aimed at testing language models on their ability to take a broad discourse context into account when predicting a word. A number of linguistic phenomena make the target words in LAMBADA easy to guess by human subjects when they can look at the whole passages they come from, but nearly impossible if only the last sentence is considered. Our preliminary experiments suggest that even some cutting-edge neural network approaches that are in principle able to track long-distance effects are far from passing the LAMBADA challenge. We hope the computational community will be stimulated to develop novel language models that are genuinely capturing the non-local phenomena that LAMBADA reflects. 
To promote research in this direction, we plan to announce a public competition based on the LAMBADA data.5 Our own hunch is that, despite the initially disappointing results of the “vanilla” Memory Network we tested, the ability to store information in a longer-term memory will be a crucial component of successful models, coupled with the ability to perform some kind of reasoning about what’s 5The development set of LAMBADA, along with the training corpus, can be downloaded at http://clic. cimec.unitn.it/lambada/. The test set will be made available at the time of the competition. stored in memory, in order to retrieve the right information from it. On a more general note, we believe that leveraging human performance on word prediction is a very promising strategy to construct benchmarks for computational models that are supposed to capture various aspects of human text understanding. The influence of broad context as explored by LAMBADA is only one example of this idea. Acknowledgments We are grateful to Aurelie Herbelot, Tal Linzen, Nghia The Pham and, especially, Roberto Zamparelli for ideas and feedback. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 655577 (LOVe); ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES); NWO VIDI grant n. 276-89-008 (Asymmetry in Conversation). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632–642, Lisbon, Portugal. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, Montreal, Canada. Published online: https://papers.nips.cc/book/ advances-in-neural-informationprocessing-systems-28-2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of ICLR Conference Track, San Juan, Puerto Rico. Published online: http://www.iclr.cc/doku.php?id= iclr2016:main. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–178–. 1533 Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. http://arxiv.org/abs/ 1511.03962. Tomas Mikolov, Stefan Kombrink, Anoop Deoras, Lukar Burget, and Jan Honza Cernocky. 2011. Rnnlm - recurrent neural network language. In Proceedings of ASRU. IEEE Automatic Speech Recognition and Understanding Workshop. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. http://arxiv.org/ abs/1301.3781/. Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc’Aurelio Ranzato. 2015. Learning longer memory in recurrent neural networks. In Proceedings of ICLR Workshop Track, San Diego, CA. Published online: http://www. iclr.cc/doku.php?id=iclr2015:main. Tomas Mikolov. 2014. 
Using neural networks for modelling and representing natural languages. Slides presented at COLING, online at http://www.coling2014.org/COLING\2014\Tutorialfix\-\TomasMikolov.pdf. Matthew Richardson, Christopher Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, pages 193–203, Seattle, WA. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of ICLR Conference Track, San Juan, Puerto Rico. Published online: http://www. iclr.cc/doku.php?id=iclr2016:main. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL, pages 196–205, Denver, CO. Andreas Stolcke. 2002. Srilm-an extensible language modeling toolkit. In INTERSPEECH, volume 2002, page 2002. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. http://arxiv.org/abs/1503. 08895. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the ICML Deep Learning Workshop, Lille, France. Published online: https://sites.google.com/site/ deeplearning2015/accepted-papers. Tian Wang and Kyunghyun Cho. 2015. Largercontext language modelling. http://arxiv. org/abs/1511.03729. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. http://arxiv.org/abs/1502.05698. Wenpeng Yin and Hinrich Sch¨utze. 2015. MultiGranCNN: An architecture for general matching of text chunks on multiple levels of granularity. In Proceedings of ACL, pages 63–73, Beijing, China. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV 2015, pages 19–27. Geoffrey Zweig and Christopher Burges. 2011. The Microsoft Research sentence completion challenge. Technical Report MSR-TR-2011-129, Microsoft Research. 1534
2016
144
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1535–1545, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics WIKIREADING: A Novel Large-scale Language Understanding Task over Wikipedia Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey and David Berthelot Google Research {dhewlett,allac,llion,ipolosukhin,fto,hanjay,matkelcey,dberth}@google.com Abstract We present WIKIREADING, a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). We compare various state-of-the-art DNNbased architectures for document classification, information extraction, and question answering. We find that models supporting a rich answer space, such as word or character sequences, perform best. Our best-performing model, a word-level sequence to sequence model with a mechanism to copy out-of-vocabulary words, obtains an accuracy of 71.8%. 1 Introduction A growing amount of research in natural language understanding (NLU) explores end-to-end deep neural network (DNN) architectures for tasks such as text classification (Zhang et al., 2015), relation extraction (Nguyen and Grishman, 2015), and question answering (Weston et al., 2015). These models offer the potential to remove the intermediate steps traditionally involved in processing natural language data by operating on increasingly raw forms of text input, even unprocessed character or byte sequences. Furthermore, while these tasks are often studied in isolation, DNNs have the potential to combine multiple forms of reasoning within a single model. Supervised training of DNNs often requires a large amount of high-quality training data. To this end, we introduce a novel prediction task and accompanying large-scale dataset with a range of sub-tasks combining text classification and information extraction. The dataset is made publiclyavailable at http://goo.gl/wikireading. The task, which we call WIKIREADING, is to predict textual values from the open knowledge base Wikidata (Vrandeˇci´c and Kr¨otzsch, 2014) given text from the corresponding articles on Wikipedia (Ayers et al., 2008). Example instances are shown in Table 1, illustrating the variety of subject matter and sub-tasks. The dataset contains 18.58M instances across 884 sub-tasks, split roughly evenly between classification and extraction (see Section 2 for more details). In addition to its diversity, the WIKIREADING dataset is also at least an order of magnitude larger than related NLU datasets. Many natural language datasets for question answering (QA), such as WIKIQA (Yang et al., 2015), have only thousands of examples and are thus too small for training end-to-end models. Hermann et al. (2015) proposed a task similar to QA, predicting entities in news summaries from the text of the original news articles, and generated a NEWS dataset with 1M instances. The bAbI dataset (Weston et al., 2015) requires multiple forms of reasoning, but is composed of synthetically generated documents. 
WIKIQA and NEWS only involve pointing to locations within the document, and text classification datasets often have small numbers of output classes. In contrast, WIKIREADING has a rich output space of millions of answers, making it a challenging benchmark for state-of-the-art DNN architectures for QA or text classification. We implemented a large suite of recent models, and for the first time evaluate them on common grounds, placing the complexity of the task in context and illustrating the tradeoffs inherent in each 1535 Categorization Extraction Document Folkart Towers are twin skyscrapers in the Bayrakli district of the Turkish city of Izmir. Reaching a structural height of 200 m (656 ft) above ground level, they are the tallest ... Angeles blancos is a Mexican telenovela produced by Carlos Sotomayor for Televisa in 1990. Jacqueline Andere, Rogelio Guerra and Alfonso Iturralde star as the main . . . Canada is a country in the northern part of North America. Its ten provinces and three territories extend from the Atlantic to the Pacific and northward into the Arctic Ocean, . . . Breaking Bad is an American crime drama television series created and produced by Vince Gilligan. The show originally aired on the AMC network for five seasons, from January 20, 2008, to . . . Property country original language of work located next to body of water start time Answer Turkey Spanish Atlantic Ocean, Arctic Ocean, Pacific Ocean 20 January 2008 Table 1: Examples instances from WIKIREADING. The task is to predict the answer given the document and property. Answer tokens that can be extracted are shown in bold, the remaining instances require classification or another form of inference. approach. The highest score of 71.8% is achieved by a sequence to sequence model (Kalchbrenner and Blunsom, 2013; Cho et al., 2014) operating on word-level input and output sequences, with special handing for out-of-vocabulary words. 2 WIKIREADING We now provide background information relating to Wikidata, followed by a detailed description of the WIKIREADING prediction task and dataset. 2.1 Wikidata Wikidata is a free collaborative knowledge base containing information about approximately 16M items (Vrandeˇci´c and Kr¨otzsch, 2014). Knowledge related to each item is expressed in a set of statements, each consisting of a (property, value) tuple. For example, the item Paris might have associated statements asserting (instance of, city) or (country, France). Wikidata contains over 80M such statements across 884 properties. Items may be linked to articles on Wikipedia. 2.2 Dataset We constructed the WIKIREADING dataset from Wikidata and Wikipedia as follows: We consolidated all Wikidata statements with the same item and property into a single (item, property, answer) triple, where answer is a set of values. Replacing each item with the text of the linked Wikipedia article (discarding unlinked items) yields a dataset of 18.58M (document, property, answer) instances. Importantly, all elements in each instance are human-readable strings, making the task entirely textual. The only modification we made to these strings was to convert timestamps into a human-readable format (e.g., “4 July 1776”). The WIKIREADING task, then, is to predict the answer string for each tuple given the document and property strings. This setup can be seen as similar to information extraction, or question answering where the property acts as a “question”. 
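As a minimal sketch of the dataset construction just described, the function below consolidates statements and attaches article text. It assumes the statements are already available as (item, property, value) string triples and that article_text maps each linked item to the text of its Wikipedia article; both input formats are hypothetical simplifications of the actual Wikidata and Wikipedia data.

```python
from collections import defaultdict

def build_instances(statements, article_text):
    """Consolidate Wikidata-style statements into WIKIREADING instances.

    statements  : iterable of (item, property, value) string triples.
    article_text: dict mapping an item to the text of its linked
                  Wikipedia article; unlinked items are simply absent.
    Returns a list of (document, property, answer_set) instances.
    """
    grouped = defaultdict(set)
    for item, prop, value in statements:
        # Consolidate all values for the same (item, property) pair.
        grouped[(item, prop)].add(value)

    instances = []
    for (item, prop), answers in grouped.items():
        if item in article_text:  # discard items without a linked article
            instances.append((article_text[item], prop, answers))
    return instances
```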
We assigned all instances for each document randomly to either training (12.97M instances), validation (1.88M), and test (3.73M ) sets following a 70/10/20 distribution. This ensures that, during validation and testing, all documents are unseen. 2.3 Documents The dataset contains 4.7M unique Wikipedia articles, meaning that roughly 80% of the Englishlanguage Wikipedia is represented. Multiple instances can share the same document, with a mean of 5.31 instances per article (median: 4, max: 879). The most common categories of documents are human, taxon, film, album, and human settlement, making up 48.8% of the documents and 9.1% of the instances. The mean and median document lengths are 489.2 and 203 words. 2.4 Properties The dataset contains 884 unique properties, though the distribution of properties across instances is highly skewed: The top 20 properties cover 75% of the dataset, with 99% coverage achieved after 180 properties. We divide the properties broadly into two groups: Categorical properties, such as instance of, gender and country, require selecting between a relatively small number of possible answers, while relational properties, such as date of birth, 1536 Property Frequency Entropy instance of 2,574,038 0.431 sex or gender 941,200 0.189 country 803,252 0.536 date of birth 785,049 0.936 given name 767,916 0.763 occupation 716,176 0.589 country of citizenship 674,560 0.501 located in . . . entity 478,372 0.802 place of birth 384,951 0.800 date of death 364,910 0.943 Table 2: Training set frequency and scaled answer entropy for the 10 most frequent properties. parent, and capital, typically require extracting rare or totally unique answers from the document. To quantify this difference, we compute the entropy of the answer distribution A for each property p, scaled to the [0, 1] range by dividing by the entropy of a uniform distribution with the same number of values, i.e., ˆH(p) = H(Ap)/ log |Ap|. Properties that represent essentially one-to-one mappings score near 1.0, while a property with just a single answer would score 0.0. Table 2 lists entropy values for a subset of properties, showing that the dataset contains a spectrum of sub-tasks. We label properties with an entropy less than 0.7 as categorical, and those with a higher entropy as relational. Categorical properties cover 56.7% of the instances in the dataset, with the remaining 43.3% being relational. 2.5 Answers The distribution of properties described above has implications for the answer distribution. There are a relatively small number of very high frequency “head” answers, mostly for categorical properties, and a vast number of very low frequency “tail” answers, such as names and dates. At the extremes, the most frequent answer human accounts for almost 7% of the dataset, while 54.7% of the answers in the dataset are unique. There are some special categories of answers which are systematically related, in particular dates, which comprise 8.9% of the dataset (with 7.2% being unique). This distribution means that methods focused on either head or tail answers can each perform moderately well, but only a method that handles both types of answers can achieve maximum performance. Another consequence of the long tail of answers is that many (30.0%) of the answers in the test set never appear in the training set, meaning they must be read out of the document. An answer is present verbatim in the document for 45.6% of the instances. 
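The scaled answer entropy used in Section 2.4 above to separate categorical from relational properties can be computed as in the following sketch. The 0.7 threshold is the one stated above; the handling of properties with a single distinct answer (where log |A_p| = 0) is our own convention for the degenerate case.

```python
import math
from collections import Counter

def scaled_answer_entropy(answers):
    """H(A_p) / log |A_p| for one property, scaled to [0, 1].

    answers: list of answer strings observed for the property in the
    training set. A property with a single distinct answer scores 0.0
    (our convention for the degenerate log(1) = 0 case).
    """
    counts = Counter(answers)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))

def property_type(answers, threshold=0.7):
    """Label a property as categorical (< 0.7) or relational (>= 0.7)."""
    return "categorical" if scaled_answer_entropy(answers) < threshold else "relational"
```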
3 Methods Recently, neural network architectures for NLU have been shown to meet or exceed the performance of traditional methods (Zhang et al., 2015; Dai and Le, 2015). The move to deep neural networks also allows for new ways of combining the property and document, inspired by recent research in the field of question answering (with the property serving as a question). In sequential models such as Recurrent Neural Networks (RNNs), the question could be prepended to the document, allowing the model to “read” the document differently for each question (Hermann et al., 2015). Alternatively, the question could be used to compute a form of attention (Bahdanau et al., 2014) over the document, to effectively focus the model on the most predictive words or phrases (Sukhbaatar et al., 2015; Hermann et al., 2015). As this is currently an ongoing field of research, we implemented a range of recent models and for the first time compare them on common grounds. We now describe these methods, grouping them into broad categories by general approach and noting necessary modifications. Later, we introduce some novel variations of these models. 3.1 Answer Classification Perhaps the most straightforward approach to WIKIREADING is to consider it as a special case of document classification. To fit WIKIREADING into this framework, we consider each possible answer as a class label, and incorporate features based on the property so that the model can make different predictions for the same document. While the number of potential answers is too large to be practical (and unbounded in principle), a substantial portion of the dataset can be covered by a model with a tractable number of answers. 3.1.1 Baseline The most common approach to document classification is to fit a linear model (e.g., Logistic Regression) over bag of words (BoW) features. To serve as a baseline for our task, the linear model needs to make different predictions for the same Wikipedia article depending on the property. We enable this behavior by computing two Nw element BoW vectors, one each for the document 1537 and property, and concatenating them into a single 2Nw feature vector. 3.1.2 Neural Network Methods All of the methods described in this section encode the property and document into a joint representation y ∈Rdout, which serves as input for a final softmax layer computing a probability distribution over the top Nans answers. Namely, for each answer i ∈{1, . . . , Nans}, we have: P(i|x) = ey⊤ai/ PNans j=1 ey⊤aj, (1) where ai ∈Rdout corresponds to a learned vector associated with answer i. Thus, these models differ primarily in how they combine the property and document to produce the joint representation. For existing models from the literature, we provide a brief description and note any important differences in our implementation, but refer the reader to the original papers for further details. Except for character-level models, documents and properties are tokenized into words. The Nw most frequent words are mapped to a vector in Rdin using a learned embedding matrix1. Other words are all mapped to a special out of vocabulary (OOV) token, which also has a learned embedding. din and dout are hyperparameters for these models. Averaged Embeddings (BoW): This is the neural network version of the baseline method described in Section 3.1.1. Embeddings for words in the document and property are separately averaged. The concatenation of the resulting vectors forms the joint representation of size 2din. 
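As one concrete reading of equation (1) and of the Averaged Embeddings model just described, the sketch below gives the forward computation in NumPy. The array shapes and variable names are illustrative assumptions, and training of the embedding and answer matrices is omitted.

```python
import numpy as np

def averaged_embeddings_distribution(doc_ids, prop_ids, E, A):
    """Forward pass of the Averaged Embeddings classifier.

    doc_ids, prop_ids : lists of word indices for the document and property
    E : (N_w, d_in) word-embedding matrix
    A : (N_ans, 2 * d_in) matrix whose rows are the learned answer vectors a_i

    Returns the probability distribution over the N_ans candidate answers
    defined in equation (1).
    """
    doc_vec = E[doc_ids].mean(axis=0)         # average document word embeddings
    prop_vec = E[prop_ids].mean(axis=0)       # average property word embeddings
    y = np.concatenate([doc_vec, prop_vec])   # joint representation of size 2 * d_in
    logits = A @ y                            # y . a_i for every answer i
    logits -= logits.max()                    # subtract max for numerical stability
    exp_logits = np.exp(logits)
    return exp_logits / exp_logits.sum()
```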
Paragraph Vector: We explore a variant of the previous model where the document is encoded as a paragraph vector (Le and Mikolov, 2014). We apply the PV-DBOW variant that learns an embedding for a document by optimizing the prediction of its constituent words. These unsupervised document embeddings are treated as a fixed input to the supervised classifier, with no fine-tuning. LSTM Reader: This model is a simplified version of the Deep LSTM Reader proposed by Hermann et al. (2015). In this model, an LSTM (Hochreiter and Schmidhuber, 1997) reads the property and document sequences word-by-word 1Limited experimentation with initialization from publicly-available word2vec embeddings (Mikolov et al., 2013) yielded no improvement in performance. and the final state is used as the joint representation. This is the simplest model that respects the order of the words in the document. In our implementation we use a single layer instead of two and a larger hidden size. More details on the architecture can be found in Section 4.1 and in Table 4. Attentive Reader: This model, also presented in Hermann et al. (2015), uses an attention mechanism to better focus on the relevant part of the document for a given property. Specifically, Attentive Reader first generates a representation u of the property using the final state of an LSTM while a second LSTM is used to read the document and generate a representation zt for each word. Then, conditioned on the property encoding u, a normalized attention is computed over the document to produce a weighted average of the word representations zt, which is then used to generate the joint representation y. More precisely: mt = tanh(W1 concat(zt, u)) αt = exp (v⊺mt) r = P t αt P τ ατ zt y = tanh(W2 concat(r, u)), where W1, W2, and v are learned parameters. Memory Network: Our implementation closely follows the End-to-End Memory Network proposed in Sukhbaatar et al. (2015). This model maps a property p and a list of sentences x1, . . . , xn to a joint representation y by attending over sentences in the document as follows: The input encoder I converts a sequence of words xi = (xi1, . . . , xiLi) into a vector using an embedding matrix (equation 2), where Li is the length of sentence i.2 The property is encoded with the embedding matrix U (eqn. 3). Each sentence is encoded into two vectors, a memory vector (eqn. 4) and an output vector (eqn. 5), with embedding matrices M and C, respectively. The property encoding is used to compute a normalized attention vector over the memories (eqn. 6).3 The joint representation is the sum of the output vectors weighted 2Our final results use the position encoding method proposed by Sukhbaatar et al. (2015), which incorporates positional information in addition to word embeddings. 3Instead of the linearization method of Sukhbaatar et al. (2015), we applied an entropy regularizer for the softmax attention as described in Kurach et al. (2015). 1538 Ada , daughter of Lord Byron parent <sep> 0 0 0 0 1 1 0 0 Lord Byron <go> <end> Byron Lord (a) RNN Labeler: (b) Basic seq2seq: (c) Seq2seq with Placeholders: Ada , daughter of Lord Byron parent <sep> Lord PH_7 <go> <end> PH_7 Lord PH_3 , daughter of Lord PH_7 parent <sep> Figure 1: Illustration of RNN models. Blocks with same color share parameters. Red words are out of vocabulary and all share a common embedding. by this attention (eqn. 7). 
I(xi, W) = P j Wxij (2) u = I(p, U) (3) mi = I(xi, M) (4) ci = I(xi, C) (5) pi = softmax(q⊺mi) (6) y = u + P i pici (7) 3.2 Answer Extraction Relational properties involve mappings between arbitrary entities (e.g., date of birth, mother, and author) and thus are less amenable to document classification. For these, approaches from information extraction (especially relation extraction) are much more appropriate. In general, these methods seek to identify a word or phrase in the text that stands in a particular relation to a (possibly implicit) subject. Section 5 contains a discussion of prior work applying NLP techniques involving entity recognition and syntactic parsing to this problem. RNNs provide a natural fit for extraction, as they can predict a value at every position in a sequence, conditioned on the entire previous sequence. The most straightforward application to WIKIREADING is to predict the probability that a word at a given location is part of an answer. We test this approach using an RNN that operates on the sequence of words. At each time step, we use a sigmoid activation for estimating whether the current word is part of the answer or not. We refer to this model as the RNN Labeler and present it graphically in Figure 1a. For training, we label all locations where any answer appears in the document with a 1, and other positions with a 0 (similar to distant supervision (Mintz et al., 2009)). For multi-word answers, the word sequences in the document and answer must fully match4. Instances where no answer appears in the document are discarded for training. The cost function is the average crossentropy for the outputs across the sequence. When performing inference on the test set, sequences of consecutive locations scoring above a threshold are chunked together as a single answer, and the top-scoring answer is recorded for submission.5 3.3 Sequence to Sequence Recently, sequence to sequence learning (or seq2seq) has shown promise for natural language tasks, especially machine translation (Cho et al., 2014). These models combine two RNNs: an encoder, which transforms the input sequence into a vector representation, and a decoder, which converts the encoder vector into a sequence of output tokens, one token at a time. This makes them capable, in principle, of approximating any function mapping sequential inputs to sequential outputs. Importantly, they are the first model we consider that can perform any combination of answer classification and extraction. 3.3.1 Basic seq2seq This model resembles LSTM Reader augmented with a second RNN to decode the answer as a sequence of words. The embedding matrix is shared across the two RNNs but their state to state transition matrices are different (Figure 1b). This method extends the set of possible answers to any sequence of words from the document vocabulary. 3.3.2 Placeholder seq2seq While Basic seq2seq already expands the expressiveness of LSTM Reader, it still has a limited vocabulary and thus is unable to generate some answers. As mentioned in Section 3.2, RNN Labeler can extract any sequence of words present in the document, even if some are OOV. We extend the basic seq2seq model to handle OOV words by adding placeholders to our vocabulary, increasing the vocabulary size from Nw to Nw +Ndoc. Then, when an OOV word occurs in the document, it is replaced at random (without replacement). by one of these placeholders. We also replace the corresponding OOV words in the target output se4Dates were matched semantically to increase recall. 
5We chose an arbitrary threshold of 0.5 for chunking. The score of each chunk is obtained from the harmonic mean of the predicted probabilities of its elements. 1539 L o <go> r A d a , . o L p a r … … d a u e n t n <end> … Figure 2: Character seq2seq model. Blocks with the same color share parameters. The same example as in Figure 1 is fed character by character. quence by the same placeholder,6 as shown in Figure 1c. Luong et al. (2015) developed a similar procedure for dealing with rare words in machine translation, copying their locations into the output sequence for further processing. This makes the input and output sequences a mixture of known words and placeholders, and allows the model to produce any answer the RNN Labeler can produce, in addition to the ones that the basic seq2seq model could already produce. This approach is comparable to entity anonymization used in Hermann et al. (2015), which replaces named entities with random ids, but simpler because we use word-level placeholders without entity recognition. 3.3.3 Basic Character seq2seq Another way of handling rare words is to process the input and output text as sequences of characters or bytes. RNNs have shown some promise working with character-level input, including state-of-the-art performance on a Wikipedia text classification benchmark (Dai and Le, 2015). A model that outputs answers character by character can in principle generate any of the answers in the test set, a major advantage for WIKIREADING. This model, shown in Figure 2, operates only on sequences of mixed-case characters. The property encoder RNN transforms the property, as a character sequence, into a fixed-length vector. This property encoding becomes the initial hidden state for the second layer of a two-layer document encoder RNN, which reads the document, again, character by character. Finally, the answer decoder RNN uses the final state of the previous RNN to decode the character sequence for the answer. 6The same OOV word may occur several times in the document. Our simplified approach will attribute a different placeholder for each of these and will use the first occurrence for the target answer. 3.3.4 Character seq2seq with Pretraining Unfortunately, at the character level the length of all sequences (documents, properties, and answers) is greatly increased. This adds more sequential steps to the RNN, requiring gradients to propagate further, and increasing the chance of an error during decoding. To address this issue in a classification context, Dai and Le (2015) showed that initializing an LSTM classifier with weights from a language model (LM) improved its accuracy. Inspired by this result, we apply this principle to the character seq2seq model with a twophase training process: In the first phase, we train a character-level LM on the input character sequences from the WIKIREADING training set (no new data is introduced). In the second phase, the weights from this LM are used to initialize the first layer of the encoder and the decoder (purple and green blocks in Figure 2). After initialization, training proceeds as in the basic character seq2seq model. 4 Experiments We evaluated all methods from Section 3 on the full test set with a single scoring framework. An answer is correct when there is an exact string match between the predicted answer and the gold answer. However, as describe in Section 2.2, some answers are composed from a set of values (e.g. third example in Table 1). 
To handle this, we define the Mean F1 score as follows: For each instance, we compute the F1-score (harmonic mean of precision and recall) as a measure of the degree of overlap between the predicted answer set and the gold set for a given instance. The resulting perinstance F1 scores are then averaged to produce a single dataset-level score. This allows a method to obtain partial credit for an instance when it answers with at least one value from the golden set. In this paper, we only consider methods for answering with a single value, and most answers in the dataset are also composed of a single value, so this Mean F1 metric is closely related to accuracy. More precisely, a method using a single value as answer is bounded by a Mean F1 of 0.963. 4.1 Training Details We implemented all models in a single framework based on TensorFlow (Abadi et al., 2015) with shared pre-processing and comparable hyperparameters whenever possible. All documents are 1540 Method Mean F1 Bound Categorical Relational Date Params Answer Classifier Sparse BoW Baseline 0.438 0.831 0.725 0.063 0.004 500.5M Averaged Embeddings 0.583 0.849 0.234 0.080 120M Paragraph Vector 0.552 0.787 0.227 0.033 30M LSTM Reader 0.680 0.880 0.421 0.311 45M Attentive Reader 0.693 0.886 0.441 0.337 56M Memory Network 0.612 0.861 0.288 0.055 90.1M Answer Extraction RNN Labeler 0.357 0.471 0.240 0.536 0.626 41M Sequence to Sequence Basic seq2seq 0.708 0.925 0.844 0.530 0.738 32M Placeholder seq2seq 0.718 0.948 0.835 0.565 0.730 32M Character seq2seq 0.677 0.963 0.841 0.462 0.731 4.1M Character seq2seq (LM) 0.699 0.963 0.851 0.501 0.733 4.1M Table 3: Results for all methods described in Section 3 on the test set. F1 is the Mean F1 score described in 4. Bound is the upper bound on Mean F1 imposed by constraints in the method (see text for details). The remaining columns provide score breakdowns by property type and the number of model parameters. truncated to the first 300 words except for Character seq2seq, which uses 400 characters. The embedding matrix used to encode words in the document uses din = 300 dimensions for the Nw = 100, 000 most frequent words. Similarly, answer classification over the Nans = 50, 000 most frequent answers is performed using an answer representation of size dout = 300.7 The first 10 words of the properties are embedded using the document embedding matrix. Following Cho et al. (2014), RNNs in seq2seq models use a GRU cell with a hidden state size of 1024. More details on parameters are reported in Table 4. Method Emb. Dims Doc. Length Property Length Doc. Vocab. Size Sparse BoW Baseline N/A 300 words 10 words 50K words Paragraph Vector N/A N/A 10 words N/A Character seq2seq 30 400 chars 20 chars 76 chars All others 300 300 words 10 words 100K words Table 4: Structural model parameters. Note that the Paragraph Vector method uses the output from a separate, unsupervised model as a document encoding, which is not counted in these parameters. Optimization was performed with the Adam stochastic optimizer8 (Kingma and Adam, 2015) over mini-batches of 128 samples. Gradient clipping 9 (Graves, 2013) is used to prevent instability in training RNNs. We performed a search over 7For models like Averaged Embedding and Paragraph Vector, the concatenation imposes a greater dout. 8Using β1 = 0.9, β2 = 0.999 and ϵ = 10−8. 
9When the norm of gradient g exceeds a threshold C, it is 50 randomly-sampled hyperparameter configurations for the learning rate and gradient clip threshold, selecting the one with the highest Mean F1 on the validation set. Learning rate and clipping threshold are sampled uniformly, on a logarithmic scale, over the range [10−5, 10−2] and [10−3, 101] respectively. 4.2 Results and Discussion Results for all models on the held-out set of test instances are presented in Table 3. In addition to the overall Mean F1 scores, the model families differ significantly in Mean F1 upper bound, and their relative performance on the relational and categorical properties defined in Section 2.4. We also report scores for properties containing dates, a subset of relational properties, as a separate column since they have a distinct format and organization. For examples of model performance on individual properties, see Table 5. As expected, all classifier models perform well for categorical properties, with more sophisticated classifiers generally outperforming simpler ones. The difference in precision reading ability between models that use broad document statistics, like Averaged Embeddings and Paragraph Vectors, and the RNN-based classifiers is revealed in the scores for relational and especially date properties. As shown in Table 5, this difference is magnified in situations that are more difficult for a classifier, such as relational properties or properties with fewer training examples, where Attentive Reader outperforms Averaged Embeddings by a wide margin. This model family also has a high scaled down i.e. g ←g · min  1, C ||g||  . 1541 Mean F1 Property Test Instances Averaged Embeddings Attentive Reader Memory Network Basic seq2seq Placeholder seq2seq Character seq2seq Character seq2seq (LM) Categorical Properties instance of 734187 0.8545 0.8978 0.8720 0.8877 0.8775 0.8548 0.8659 sex or gender 267896 0.9917 0.9966 0.9936 0.9968 0.9952 0.9943 0.9941 genre 32531 0.5320 0.6225 0.5625 0.5511 0.5260 0.5096 0.5283 instrument 3665 0.7621 0.8415 0.7886 0.8377 0.8172 0.7529 0.7832 Relational Properties given name 218625 0.4973 0.8486 0.7206 0.8669 0.8868 0.8606 0.8729 located in 137253 0.4140 0.6195 0.4832 0.5484 0.6978 0.5496 0.6365 parent taxon 62685 0.1990 0.3467 0.2077 0.2044 0.7997 0.4979 0.5748 author 9517 0.0309 0.2088 0.1050 0.6094 0.6572 0.1403 0.3748 Date Properties date of birth 223864 0.0626 0.3677 0.0016 0.8306 0.8259 0.8294 0.8303 date of death 103507 0.0417 0.2949 0.0506 0.7974 0.7874 0.7897 0.7924 publication date 31253 0.3909 0.5549 0.4851 0.5988 0.5902 0.5903 0.5943 date of official opening 1119 0.1510 0.3047 0.1725 0.3333 0.3012 0.1457 0.1635 Table 5: Property-level Mean F1 scores on the test set for selected methods and properties. For each property type, the two most frequent properties are shown followed by two less frequent properties to illustrate long-tail behavior. Figure 3: Per-answer Mean F1 scores for Attentive Reader (moving average of 1000), illustrating the decline in prediction quality as the number of training examples per answer decreases. upper bound, as perfect classification across the 50, 000 most frequent answers would yield a Mean F1 of 0.831. However, none of them approaches this limit. Part of the reason is that their accuracy for a given answer decreases quickly as the frequency of the answer in the training set decreases, as illustrated in Figure 3. 
As these models have to learn a separate weight vector for each answer as part of the softmax layer (see Section 3.1), this may suggest that they fail to generalize across answers effectively and thus require significant number of training examples per answer. The only answer extraction model evaluated, RNN Labeler, shows a complementary set of strengths, performing better on relational properties than categorical ones. While the Mean F1 upper bound for this model is just 0.434 because it can only produce answers that are present verbatim in the document text, it manages to achieve most of this potential. The improvement on date properties over the classifier models demonstrates its ability to identify answers that are typically present in the document. We suspect that answer extraction may be simpler than answer classification because the model can learn robust patterns that indicate a location without needing to learn about each answer, as the classifier models must. The sequence to sequence models show a greater degree of balance between relational and categorical properties, reaching performance consistent with classifiers on the categorical questions and with RNN Labeler on relational questions. Placeholder seq2seq can in principle produce any answer that RNN Labeler can, and the performance on relational properties is indeed similar. As shown in Table 5, Placeholder seq2seq performs especially well for properties where the answer typically contains rare words such as the name of a place or person. When the set of possible answer tokens is more constrained, such 1542 as in categorical or date properties, the Basic seq2seq often performs slightly better. Character seq2seq has the highest upper bound, limited to 0.963 only because it cannot produce an answer set with multiple elements. LM pretraining consistently improves the performance of the Character seq2seq model, especially for relational properties as shown in Table 5. The performance of the Character seq2seq, especially with LM pretraining, is a surprising result: It performs comparably to the word-level seq2seq models even though it must copy long character strings when doing extraction and has access to a smaller portion of the document. We found the character based models to be particularly sensitive to hyperparameters. However, using a pretrained language model reduced this issue and significantly accelerated training while improving the final score. We believe that further research on pretraining for character based models could improve this result. 5 Related Work The goal of automatically extracting structured information from unstructured Wikipedia text was first advanced by Wu and Weld (2007). As Wikidata did not exist at that time, the authors relied on the structured infoboxes included in some Wikipedia articles for a relational representation of Wikipedia content. Wikidata is a cleaner data source, as the infobox data contains many slight variations in schema related to page formatting. Partially to get around this issue, the authors restrict their prediction model Kylin to 4 specific infobox classes, and only common attributes within each class. A substantial body of work in relation extraction (RE) follows the distant supervision paradigm (Craven and Kumlien, 1999), where sentences containing both arguments of a knowledge base (KB) triple are assumed to express the triple’s relation. 
Broadly, these models use these distant labels to identify syntactic features relating the subject and object entities in text that are indicative of the relation. Mintz et al. (2009) apply distant supervision to extracting Freebase triples (Bollacker et al., 2008) from Wikipedia text, analogous to the relational part of WIKIREADING. Extensions to distant supervision include explicitly modelling whether the relation is actually expressed in the sentence (Riedel et al., 2010), and jointly reasoning over larger sets of sentences and relations (Surdeanu et al., 2012). Recently, Rockt¨aschel et al. (2015) developed methods for reducing the number of distant supervision examples required by sharing information between relations. 6 Conclusion We have demonstrated the complexity of the WIKIREADING task and its suitability as a benchmark to guide future development of DNN models for natural language understanding. After comparing a diverse array of models spanning classification and extraction, we conclude that end-to-end sequence to sequence models are the most promising. These models simultaneously learned to classify documents and copy arbitrary strings from them. In light of this finding, we suggest some focus areas for future research. Our character-level model improved substantially after language model pretraining, suggesting that further training optimizations may yield continued gains. Document length poses a problem for RNN-based models, which might be addressed with convolutional neural networks that are easier to parallelize. Finally, we note that these models are not intrinsically limited to English, as they rely on little or no pre-processing with traditional NLP systems. This means that they should generalize effectively to other languages, which could be demonstrated by a multilingual version of WIKIREADING. Acknowledgments We thank Jonathan Berant for many helpful comments on early drafts of the paper, and Catherine Finegan-Dollak for an early implementation of RNN Labeler. References Martın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2015. Tensorflow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow. org. Phoebe Ayers, Charles Matthews, and Ben Yates. 2008. How Wikipedia works: And how you can be a part of it. No Starch Press. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR). 1543 Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD ’08, pages 1247–1250, New York, NY, USA. ACM. Kyunghyun Cho, Bart Van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar, October. Association for Computational Linguistics. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. 
In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, pages 77–86. AAAI Press. Andrew M Dai and Quoc V Le. 2015. Semisupervised sequence learning. In Advances in Neural Information Processing Systems, pages 3061– 3069. Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the CVSC Workshop, Sofia, Bulgaria. Association of Computational Linguistics. Diederik P Kingma and Jimmy Ba Adam. 2015. A method for stochastic optimization. In International Conference on Learning Representation. Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. 2015. Neural random-access machines. In International Conference on Learning Representations (ICLR). Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of The 31st International Conference on Machine Learning, pp. , 2014, pages 1188?–1196. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19, Beijing, China, July. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, ACL ’09, pages 1003–1011, Stroudsburg, PA, USA. Association for Computational Linguistics. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of NAACL-HLT, pages 39–48. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Tim Rockt¨aschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting Logical Background Knowledge into Embeddings for Relation Extraction. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455– 465. Association for Computational Linguistics. 
Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: A free collaborative knowledgebase. Commun. ACM, 57:78–85. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. May. 1544 Fei Wu and Daniel S Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41–50. ACM. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657. 1545
2016
145
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1546–1556, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Optimizing Spectral Learning for Parsing Shashi Narayan and Shay B. Cohen School of Informatics University of Edinburgh Edinburgh, EH8 9LE, UK {snaraya2,scohen}@inf.ed.ac.uk Abstract We describe a search algorithm for optimizing the number of latent states when estimating latent-variable PCFGs with spectral methods. Our results show that contrary to the common belief that the number of latent states for each nonterminal in an L-PCFG can be decided in isolation with spectral methods, parsing results significantly improve if the number of latent states for each nonterminal is globally optimized, while taking into account interactions between the different nonterminals. In addition, we contribute an empirical analysis of spectral algorithms on eight morphologically rich languages: Basque, French, German, Hebrew, Hungarian, Korean, Polish and Swedish. Our results show that our estimation consistently performs better or close to coarse-to-fine expectation-maximization techniques for these languages. 1 Introduction Latent-variable probabilistic context-free grammars (L-PCFGs) have been used in the natural language processing community (NLP) for syntactic parsing for over a decade. They were introduced in the NLP community by Matsuzaki et al. (2005) and Prescher (2005), with Matsuzaki et al. using the expectation-maximization (EM) algorithm to estimate them. Their performance on syntactic parsing of English at that stage lagged behind state-of-the-art parsers. Petrov et al. (2006) showed that one of the reasons that the EM algorithm does not estimate state-of-the-art parsing models for English is that the EM algorithm does not control well for the model size used in the parser – the number of latent states associated with the various nonterminals in the grammar. As such, they introduced a coarse-to-fine technique to estimate the grammar. It splits and merges nonterminals (with latent state information) with the aim to optimize the likelihood of the training data. Together with other types of fine tuning of the parsing model, this led to state-of-the-art results for English parsing. In more recent work, Cohen et al. (2012) described a different family of estimation algorithms for L-PCFGs. This so-called “spectral” family of learning algorithms is compelling because it offers a rigorous theoretical analysis of statistical convergence, and sidesteps local maxima issues that arise with the EM algorithm. While spectral algorithms for L-PCFGs are compelling from a theoretical perspective, they have been lagging behind in their empirical results on the problem of parsing. In this paper we show that one of the main reasons for that is that spectral algorithms require a more careful tuning procedure for the number of latent states than that which has been advocated for until now. In a sense, the relationship between our work and the work of Cohen et al. (2013) is analogous to the relationship between the work by Petrov et al. (2006) and the work by Matsuzaki et al. (2005): we suggest a technique for optimizing the number of latent states for spectral algorithms, and test it on eight languages. 
Our results show that when the number of latent states is optimized using our technique, the parsing models the spectral algorithms yield perform significantly better than the vanilla-estimated models, and for most of the languages – better than the Berkeley parser of Petrov et al. (2006). As such, the contributions of this parser are twofold: • We describe a search algorithm for optimiz1546 ing the number of latent states for spectral learning. • We describe an analysis of spectral algorithms on eight languages (until now the results of L-PCFG estimation with spectral algorithms for parsing were known only for English). Our parsing algorithm is rather language-generic, and does not require significant linguistically-oriented adjustments. In addition, we dispel the common wisdom that more data is needed with spectral algorithms. Our models yield high performance on treebanks of varying sizes from 5,000 sentences (Hebrew and Swedish) to 40,472 sentences (German). The rest of the paper is organized as follows. In §2 we describe notation and background. §3 further investigates the need for an optimization of the number of latent states in spectral learning and describes our optimization algorithm, a search algorithm akin to beam search. In §4 we describe our experiments with natural language parsing for Basque, French, German, Hebrew, Hungarian, Korean, Polish and Swedish. We conclude in §5. 2 Background and Notation We denote by [n] the set of integers {1, . . . , n}. An L-PCFG is a 5-tuple (N, I, P, f, n) where: • N is the set of nonterminal symbols in the grammar. I ⊂N is a finite set of interminals. P ⊂N is a finite set of preterminals. We assume that N = I ∪P, and I ∩P = ∅. Hence we have partitioned the set of nonterminals into two subsets. • f: N →N is a function that maps each nonterminal a to the number of latent states it uses. The set [ma] includes the possible hidden states for nonterminal a. • [n] is the set of possible words. • For all a ∈I, b ∈N, c ∈N, h1 ∈[ma], h2 ∈[mb], h3 ∈[mc], we have a binary context-free rule a(h1) →b(h2) c(h3). • For all a ∈P, h ∈[ma], x ∈[n], we have a lexical context-free rule a(h) →x. The estimation of an L-PCFG requires an assignment of probabilities (or weights) to each of the rules a(h1) →b(h2) c(h3) and a(h) →x, and also an assignment of starting probabilities for each a(h), where a ∈I and h ∈[ma]. Estimation is usually assumed to be done from a set of parse trees (a treebank), where the latent states are not included in the data – only the “skeletal” trees which consist of nonterminals in N. L-PCFGs, in their symbolic form, are related to regular tree grammars, an old grammar formalism, but they were introduced as statistical models for parsing with latent heads more recently by Matsuzaki et al. (2005) and Prescher (2005). Earlier work about L-PCFGs by Matsuzaki et al. (2005) used the expectation-maximization (EM) algorithm to estimate the grammar probabilities. Indeed, given that the latent states are not observed, EM is a good fit for L-PCFG estimation, since it aims to do learning from incomplete data. This work has been further extended by Petrov et al. (2006) to use EM in a coarse-to-fine fashion: merging and splitting nonterminals using the latent states to optimize the number of latent states for each nonterminal. Cohen et al. (2012) presented a so-called spectral algorithm to estimate L-PCFGs. This algorithm uses linear-algebraic procedures such as singular value decomposition (SVD) during learning. The spectral algorithm of Cohen et al. 
builds on an estimation algorithm for HMMs by Hsu et al. (2009).1 Cohen et al. (2013) experimented with this spectral algorithm for parsing English. A different variant of a spectral learning algorithm for L-PCFGs was developed by Cohen and Collins (2014). It breaks the problem of L-PCFG estimation into multiple convex optimization problems which are solved using EM. The family of L-PCFG spectral learning algorithms was further extended by Narayan and Cohen (2015). They presented a simplified version of the algorithm of Cohen et al. (2012) that estimates sparse grammars and assigns probabilities (instead of weights) to the rules in the grammar, and as such does not suffer from the problem of negative probabilities that arise with the original spectral algorithm (see discussion in Cohen et al., 2013). In this paper, we use the algorithms by Narayan and Cohen (2015) and Cohen 1A related algorithm for weighted tree automata (WTA) was developed by Bailly et al. (2010). However, the conversion from L-PCFGs to WTA is not straightforward, and information is lost in this conversion. See also (Rabusseau et al., 2016). 1547 VP V chased NP D the N cat S NP D the N mouse VP Figure 1: The inside tree (left) and outside tree (right) for the nonterminal VP in the parse tree (S (NP (D the) (N mouse)) (VP (V chased) (NP (D the) (N cat)))) for the sentence “the mouse chased the cat.” et al. (2012), and we compare them against stateof-the-art L-PCFG parsers such as the Berkeley parser (Petrov et al., 2006). We also compare our algorithms to other state-of-the-art parsers where elaborate linguistically-motivated feature specifications (Hall et al., 2014), annotations (Crabb´e, 2015) and formalism conversions (Fern´andezGonz´alez and Martins, 2015) are used. 3 Optimizing Spectral Estimation In this section, we describe our optimization algorithm and its motivation. 3.1 Spectral Learning of L-PCFGs and Model Size The family of spectral algorithms for latentvariable PCFGs rely on feature functions that are defined for inside and outside trees. Given a tree, the inside tree for a node contains the entire subtree below that node; the outside tree contains everything in the tree excluding the inside tree. Figure 1 shows an example of inside and outside trees for the nonterminal VP in the parse tree of the sentence “the mouse chased the cat”. With L-PCFGs, the model dictates that an inside tree and an outside tree that are connected at a node are statistically conditionally independent of each other given the node label and the latent state that is associated with it. As such, one can identify the distribution over the latent states for a given nonterminal a by using the cross-covariance matrix of the inside and the outside trees, Ωa. For more information on the definition of this crosscovariance matrix, see Cohen et al. (2012) and Narayan and Cohen (2015). The L-PCFG spectral algorithms use singular value decomposition (SVD) on Ωa to reduce the dimensionality of the feature functions. If Ωa is computed from the true L-PCFG distribution then the rank of Ωa (the number of non-zero singular values) gives the number of latent states according to the model. In the case of estimating Ωa from data generated from an L-PCFG, the number of latent states for each nonterminal can be exposed by capping it when the singular values of Ωa are smaller than some threshold value. This means that spectral algorithms give a natural way for the selection of the number of latent states for each nonterminal a in the grammar. 
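A minimal sketch of this singular-value criterion is given below, assuming an empirical estimate of Omega^a is available as a matrix; the threshold value is arbitrary and only illustrates the capping of small singular values mentioned above.

```python
import numpy as np

def latent_states_from_svd(omega_a, threshold=1e-3, max_states=None):
    """Pick the number of latent states for one nonterminal a from the
    singular values of its estimated inside-outside cross-covariance
    matrix Omega^a.

    omega_a    : 2-D array, the empirical estimate of Omega^a
    threshold  : singular values below this value are treated as zero
                 (an arbitrary illustrative choice)
    max_states : optional upper bound on the number of latent states

    Returns the number of retained singular values (at least 1).
    """
    singular_values = np.linalg.svd(omega_a, compute_uv=False)
    m_a = int((singular_values > threshold).sum())
    if max_states is not None:
        m_a = min(m_a, max_states)
    return max(m_a, 1)
```

Note that this criterion considers each nonterminal in isolation.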
However, when the data from which we estimate an L-PCFG model are not drawn from an LPCFG (the model is “incorrect”), the number of non-zero singular values (or the number of singular values which are large) is no longer sufficient to determine the number of latent states for each nonterminal. This is where our algorithm comes into play: it optimizes the number of latent search for each nonterminal by applying a search algorithm akin to beam search. 3.2 Optimizing the Number of Latent States As mentioned in the previous section, the number of non-zero singular values of Ωa gives a criterion to determine the number of latent states ma for a given nonterminal a. In practice, we cap ma not to include small singular values which are close to 0, because of estimation errors of Ωa. This procedure does not take into account the interactions that exist between choices of latent state numbers for the various nonterminals. In principle, given the independence assumptions that L-PCFGs make, choosing the nonterminals based only on the singular values is “statistically correct.” However, because in practice the modeling assumptions that we make (that natural language parse trees are drawn from an L-PCFG) do not hold, we can improve further the accuracy of the model by taking into account the nonterminal interaction. Another source of difficulty in choosing the number of latent states based the singular values of Ωa is sampling error: in practice, we are using data to estimate Ωa, and as such, even if the model is correct, the rank of the estimated matrix does not have to correspond to the rank of Ωa according to the true distribution. As a matter of fact, in addition to neglecting small singular values, the spectral methods of Cohen et al. (2013) and Narayan and Cohen (2015) also cap the number of latent states for each nonterminal to an up1548 Inputs: An input treebank divided into training and development set. A basic spectral estimation algorithm S with its default setting. An integer k denoting the size of the beam. An integer m denoting the upper bound on the number of latent states. Algorithm: (Step 0: Initialization) • Set Q, a queue of size k, to be empty. • Estimate an L-PCFG GS : (N, I, P, fS, n) using S. • Initialize f = fS, a function that maps each nonterminal a ∈N to the number of latent states. • Let L be a list of nonterminals (a1, . . . , aM) such that ai ∈N for which to optimize the number of latent states. • Let s be the F1 score for the above L-PCFG GS on the development set. • Put in Q the element (s, 1, f, coarse). • The queue is ordered by s, the first element of tuples, in the queue. (Step 1: Search, repeat until termination happens) • Dequeue the queue into (s, j, f, t) where j is the index in the input nonterminal list L. • If j = (M + 1), return f. • If t is coarse then for each m0 ∈{1, 5, 10, . . . , m}: • Let f0 be such that ∀a ̸= aj f0(a) = f(a) and f0(aj) = m0. • Train an L-PCFG G0 using S but with f0. • Let s0 be the F1 score for G0 on the development set. • Enqueue into Q: (s0, j, f0, refine). • If t is refine then for each m0 ∈{f(a) + ℓ| ℓ∈ {−4, −3, −2, −1, 0, 1, 2, 3, 4}}: • Let f0 be such that ∀a ̸= aj f0(a) = f(a) and f0(aj) = m0. • Train an L-PCFG G0 using S but with f0. • Let s0 be the F1 score for G0 on the development set. • Enqueue into Q: (s0, j + 1, f0, coarse). Figure 2: A search algorithm for finding the optimal number of latent states. per bound to keep the grammar size small. Petrov et al. (2006) improves over the estimation described in Matsuzaki et al. 
(2005) by taking into account the interactions between the nonterminals and their latent state numbers in the training data. They use the EM algorithm to split and merge nonterminals using the latent states, and optimize the number of latent states for each nonterminal such that it maximizes the likelihood of a training treebank. Their refined grammar successfully splits nonterminals to various degrees to capture their complexity. We take the analogous step with spectral methods. We propose an algorithm where we first compute Ωa on the training data and then we optimize the number of latent states for each nonterminal by optimizing the PARSEVAL metric (Black et al., 1991) on a development set. Our optimization algorithm appears in Figure 2. The input to the algorithm is training and development data in the form of parse trees, a basic spectral estimation algorithm S in its default setting, an upper bound m on the number of latent states that can be used for the different nonterminals and a beam size k which gives a maximal queue size for the beam. The algorithm aims to learn a function f that maps each nonterminal a to the number of latent states. It initializes f by estimating a default grammar GS : (N, I, P, fS, n) using S and setting f = fS. It then iterates over a ∈N, improving f such that it optimizes the PARSEVAL metric on the development set. The state of the algorithm includes a queue that consists of tuples of the form (s, j, f, t) where f is an assignment of latent state numbers to each nonterminal in the grammar, j is the index of a nonterminal to be explored in the input nonterminal list L, s is the F1 score on the development set for a grammar that is estimated with f and t is a tag that can either be coarse or refine. The algorithm orders these tuples by s in the queue, and iteratively dequeues elements from the queue. Then, depending on the label t, it either makes a refined search for the number of latent states for aj, or a more coarse search. As such, the algorithm can be seen as a variant of a beam search algorithm. The search algorithm can be used with any training algorithm for L-PCFGs, including the algorithms of Cohen et al. (2013) and Narayan and Cohen (2015). These methods, in their default setting, use a function fS which maps each nonterminal a to a fixed number of latent states ma it uses. In this case, S takes as input training data, in the form of a treebank, decomposes into inside and outside trees at each node in each tree in the training set; and reduces the dimensionality of the inside and outside feature functions by running 1549 lang. Basque French German-N German-T Hebrew Hungarian Korean Polish Swedish train sent. 7,577 14,759 18,602 40,472 5,000 8,146 23,010 6,578 5,000 tokens 96,565 443,113 328,531 719,532 128,065 170,221 301,800 66,814 76,332 lex. size 25,136 27,470 48,509 77,219 15,971 40,775 85,671 21,793 14,097 #nts 112 222 208 762 375 112 352 198 148 dev sent. 948 1,235 1,000 5,000 500 1,051 2,066 821 494 tokens 13,893 38,820 17,542 76,704 11,305 30,010 25,729 8,391 9,339 test sent. 946 2,541 1,000 5,000 716 1,009 2,287 822 666 tokens 11,477 75,216 17,585 92,004 17,002 19,913 28,783 8,336 10,675 Table 1: Statistics about the different datasets used in our experiments for the training (“train”), development (“dev”) and test (“test”) sets. “sent.” denotes the number of sentences in the dataset, “tokens” denotes the total number of words in the dataset, “lex. 
size” denotes the vocabulary size in the training set and “#nts” denotes the number of nonterminals in the training set after binarization. SVD on the cross-covariance matrix Ωa of the inside and the outside trees, for each nonterminal a. Cohen et al. (2013) estimate the parameters of the L-PCFG up to a linear transformation using f(a) non-zero singular values of Ωa, whereas Narayan and Cohen (2015) use the feature representations induced from the SVD step to cluster instances of nonterminal a in the training data into f(a) clusters; these clusters are then treated as latent states that are “observed.” Finally, Narayan and Cohen follow up with a simple frequency count maximum likelihood estimate to estimate the parameters in the L-PCFG with these latent states. An important point to make is that the learning algorithms of Narayan and Cohen (2015) and Cohen et al. (2013) are relatively fast,2 in comparison to the EM algorithm. They require only one iteration over the data. In addition, the SVD step of S for these learning algorithms is computed just once for a large m. The SVD of a lower rank can then be easily computed from that SVD. 4 Experiments In this section, we describe our setup for parsing experiments on a range of languages. 4.1 Experimental Setup Datasets We experiment with nine treebanks consisting of eight different morphologically rich languages: Basque, French, German, Hebrew, Hungarian, Korean, Polish and Swedish. Table 1 shows the statistics of 9 different treebanks with their splits into training, development and test sets. Eight out of the nine datasets (Basque, French, German-T, Hebrew, Hungarian, Korean, Polish 2It has been documented in several papers that the family of spectral estimation algorithms is faster than algorithms such as EM, not just for L-PCFGs. See, for example, Parikh et al. (2012). and Swedish) are taken from the workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL; Seddah et al., 2013). The German corpus in the SPMRL workshop is taken from the TiGer corpus (German-T, Brants et al., 2004). We also experiment with another German corpus, the NEGRA corpus (German-N, Skut et al., 1997), in a standard evaluation split.3 Words in the SPMRL datasets are annotated with their morphological signatures, whereas the NEGRA corpus does not contain any morphological information. Data preprocessing and treatment of rare words We convert all trees in the treebanks to a binary form, train and run the parser in that form, and then transform back the trees when doing evaluation using the PARSEVAL metric. In addition, we collapse unary rules into unary chains, so that our trees are fully binarized. The column “#nts” in Table 1 shows the number of nonterminals after binarization in the various treebanks. Before binarization, we also drop all functional information from the nonterminals. We use fine tags for all languages except Korean. This is in line with Bj¨orkelund et al. (2013).4 For Korean, there are 2,825 binarized nonterminals making it impractical to use our optimization algorithm, so we use the coarse tags. Bj¨orkelund et al. (2013) have shown that the morphological signatures for rare words are useful to improve the performance of the Berkeley parser. 3We use the first 18,602 sentences as a training set, the next 1,000 sentences as a development set and the last 1,000 sentences as a test set. This corresponds to an 80%-10%-10% split of the treebank. 4In their experiments Bj¨orkelund et al. 
(2013) found that fine tags were not useful for Basque also; they did not find a proper explanation for that. In our experiments, however, we found that fine tags were useful for Basque. To retrieve the fine tags, we concatenate coarse tags with their refinement feature (“AZP”) values. 1550 In our preliminary experiments with na¨ıve spectral estimation, we preprocess rare words in the training set in two ways: (i) we replace them with their corresponding POS tags, and (ii) we replace them with their corresponding POS+morphological signatures. We follow Bj¨orkelund et al. (2013) and consider a word to be rare if it occurs less than 20 times in the training data. We experimented both with a version of the parser that does not ignore and does ignore letter cases, and discovered that the parser behaves better when case is not ignored. Spectral algorithms: subroutine choices The latent state optimization algorithm will work with either the clustering estimation algorithm of Narayan and Cohen (2015) or the spectral algorithm of Cohen et al. (2013). In our setup, we first run the latent state optimization algorithm with the clustering algorithm. We then run the spectral algorithm once with the optimized f from the clustering algorithm. We do that because the clustering algorithm is significantly faster to iteratively parse the development set, because it leads to sparse estimates. Our optimization algorithm is sensitive to the initialization of the number of latent states assigned to each nonterminals as it sequentially goes through the list of nonterminals and chooses latent state numbers for each nonterminal, keeping latent state numbers for other nonterminals fixed. In our setup, we start our search algorithm with the best model from the clustering algorithm, controlling for all hyperparameters; we tune f, the function which maps each nonterminal to a fixed number of latent states m, by running the vanilla version with different values of m for different languages. Based on our preliminary experiments, we set m to 4 for Basque, Hebrew, Polish and Swedish; 8 for German-N; 16 for German-T, Hungarian and Korean; and 24 for French. We use the same features for the spectral methods as in Narayan and Cohen (2015) for GermanN. For the SPMRL datasets we do not use the head features. These require linguistic understanding of the datasets (because they require head rules for propagating leaf nodes in the tree), and we discovered that simple heuristics for constructing these rules did not yield an increase in performance. We use the kmeans function in Matlab to do the clustering for the spectral algorithm of Narayan and Cohen (2015). We experimented with several versions of k-means, and discovered that the version that works best in a set of preliminary experiments is hard k-means.5 Decoding and evaluation For efficiency, we use a base PCFG without latent states to prune marginals which receive a value less than 0.00005 in the dynamic programming chart. This is just a bare-bones PCFG that is estimated using maximum likelihood estimation (with frequency count). The parser takes part-of-speech tagged sentences as input. We tag the German-N data using the Turbo Tagger (Martins et al., 2010). For the languages in the SPMRL data we use the MarMot tagger of M¨ueller et al. (2013) to jointly predict the POS and morphological tags.6 The parser itself can assign different part-of-speech tags to words to avoid parse failure. 
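The paper does not spell out how this re-assignment is implemented; the sketch below is one plausible realization under our own assumptions (the function name, the `preterminal_logprob` scorer, the tag set argument and the penalty value are all hypothetical, not taken from the authors' system). The idea is to populate the span-1 chart cells with every candidate preterminal and merely penalize, rather than forbid, tags that disagree with the input tagger.

```python
import math

def init_preterminal_cells(words, predicted_tags, preterminal_logprob,
                           tag_set, override_penalty=math.log(1e-3)):
    """Fill span-1 chart cells with every candidate preterminal.

    preterminal_logprob(tag, word) -> log P(word | tag) from the base PCFG.
    The tag predicted by the tagger keeps its full score; all other tags
    pay a fixed log-probability penalty, so a wrong input tag cannot make
    the sentence unparseable.
    """
    chart = []
    for i, word in enumerate(words):
        cell = {}
        for tag in tag_set:
            score = preterminal_logprob(tag, word)
            if tag != predicted_tags[i]:
                score += override_penalty  # discourage, but do not rule out
            cell[tag] = score
        chart.append(cell)
    return chart
```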
This is also particularly important for constituency parsing with morphologically rich languages. It helps mitigate the problem of the taggers to assign correct tags when long-distance dependencies are present. For all results, we report the F1 measure of the PARSEVAL metric (Black et al., 1991). We use the EVALB program7 with the parameter file COLLINS.prm (Collins, 1999) for the German-N data and the SPMRL parameter file, spmrl.prm, for the SPMRL data (Seddah et al., 2013). In this setup, the latent state optimization algorithm terminates in few hours for all datasets except French and German-T. The German-T data has 762 nonterminals to tune over a large development set consisting of 5,000 sentences, whereas, the French data has a high average sentence length of 31.43 in the development set.8 Following Narayan and Cohen (2015), we further improve our results by using multiple spectral models where noise is added to the underlying features in the training set before the estimation of each model.9 Using the optimized f, we estimate 5To be more precise, we use the Matlab function kmeans while passing it the parameter ‘start’=‘sample’ to randomly sample the initial centroid positions. In our experiments, we found that default initialization of centroids differs in Matlab14 (random) and in Matlab15 (kmeans++). Our estimation performs better with random initialization. 6See Bj¨orkelund et al. (2013) for the performance of the MarMot tagger on the SPMRL datasets. 7http://nlp.cs.nyu.edu/evalb/ 8To speed up tuning on the French data, we drop sentences with length >46 from the development set, dropping its size from 12,35 to 1,006. 9We only use the algorithm of Narayan and Cohen (2015) for the noisy model estimation. They have shown that decoding with noisy models performs better with their sparse 1551 lang. Basque French German-N German-T Hebrew Hungarian Korean Polish Swedish Bk van 69.2 79.9 81.7 87.8 83.9 71.0 84.1 74.5 rep 84.3 79.7 82.7 89.6 89.1 82.8 87.1 75.5 Cl van (pos) 69.8 73.9 75.7 78.3 88.0 81.3 68.7 90.3 70.9 van (rep) 78.6 73.7 78.8 88.1 84.7 76.5 90.4 71.4 opt 81.2∗ 76.7 77.8 81.7 90.1 87.2 79.2 92.0 75.2 Sp van 78.1 78.0 77.6 82.0 89.2 87.7 80.6 91.7 73.4 opt 79.0 78.1∗ 79.0∗ 82.9∗ 90.3∗ 87.8∗ 80.9∗ 91.7∗ 75.5∗ Bk multiple 87.4 82.5 85.0 90.5 91.1 84.6 88.4 79.5 Cl multiple 83.4 79.9 82.7 85.1 90.6 89.0 80.8 92.5 78.3 Hall et al. ’14 83.7 79.4 83.3 88.1 87.4 81.9 91.1 76.0 Crabb´e ’15 84.0 80.9 84.1 90.7 88.3 83.1 92.8 77.9 Table 2: Results on the development datasets. “Bk” makes use of the Berkeley parser with its coarse-to-fine mechanism to optimize the number of latent states (Petrov et al., 2006). For Bk, “van” uses the vanilla treatment of rare words using signatures defined by Petrov et al. (2006), whereas “rep.” uses the morphological signatures instead. “Cl” uses the algorithm of Narayan and Cohen (2015) and “Sp” uses the algorithm of Cohen et al. (2013). In Cl, “van (pos)” and “van (rep)” are vanilla estimations (i.e., each nonterminal is mapped to fixed number of latent states) replacing rare words by POS or POS+morphological signatures, respectively. The best of these two models is used with our optimization algorithm in “opt”. For Sp, “van” uses the best setting for unknown words as Cl. Best result in each column from the first seven rows is in bold. In addition, our best performing models from rows 3-7 are marked with ∗. 
“Bk multiple” shows the best results with the multiple models using product-of-grammars procedure (Petrov, 2010) and discriminative reranking (Charniak and Johnson, 2005). “Cl multiple” gives the results with multiple models generated using the noise induction and decoded using the hierarchical decoding (Narayan and Cohen, 2015). Bk results are not available on the development dataset for German-N. For others, we report Bk results from Bj¨orkelund et al. (2013). We also include results from Hall et al. (2014) and Crabb´e (2015). lang. Basque French German-N German-T Hebrew Hungarian Korean Polish Swedish Bk 74.7 80.4 80.1 78.3 87.0 85.2 78.6 86.8 80.6 Cl van 79.6 74.3 76.4 74.1 86.3 86.5 76.5 90.5 76.4 opt 81.4∗ 75.6 78.0 76.0 87.2 88.4 78.4 91.2 79.4 Sp van 79.9 78.7 78.4 78.0 87.8 89.1 80.3 91.8 78.4 opt 80.5 79.1∗ 79.4∗ 78.2∗ 89.0∗ 89.2∗ 80.0∗ 91.8∗ 80.9∗ Bk multiple 87.9 82.9 84.5 81.3 89.5 91.9 84.3 87.8 84.9 Cl multiple 83.4 80.4 82.7 80.4 89.2 89.9 80.3 92.4 82.8 Hall et al. ’14 83.4 79.7 78.4 87.2 88.3 80.2 90.7 82.0 F&M ’15 85.9 78.8 78.7 89.0 88.2 79.3 91.2 82.8 Crabb´e ’15 84.9 80.8 79.3 89.7 90.1 82.7 92.7 83.2 Table 3: Results on the test datasets. “Bk” denotes the best Berkeley parser result reported by the shared task organizers (Seddah et al., 2013). For the German-N data, Bk results are taken from Petrov (2010). “Cl van” shows the performance of the best vanilla models from Table 2 on the test set. “Cl opt” and “Sp opt” give the result of our algorithm on the test set. We also include results from Hall et al. (2014), Crabb´e (2015) and Fern´andez-Gonz´alez and Martins (2015). 80 models for each of noise induction mechanisms in Narayan and Cohen: Dropout, Gaussian (additive) and Gaussian (multiplicative). To decode with multiple noisy models, we train the MaxEnt reranker of Charniak and Johnson (2005).10 Hierarchical decoding with “maximal tree coverage” over MaxEnt models, further improves our accuracy. See Narayan and Cohen (2015) for more details on the estimation of a diverse set of models, and on decoding with them. estimates than the dense estimates of Cohen et al. (2013). 10Implementation: https://github.com/BLLIP/ bllip-parser. More specifically, we used the programs extract-spfeatures, cvlm-lbfgs and best-indices. extract-spfeatures uses head features, we bypass this for the SPMRL datasets by creating a dummy heads.cc file. cvlm-lbfgs was used with the default hyperparameters from the Makefile. 4.2 Results Table 2 and Table 3 give the results for the various languages.11 Our main focus is on comparing the coarse-to-fine Berkeley parser (Petrov et al., 2006) to our method. However, for the sake of completeness, we also present results for other parsers, such as parsers of Hall et al. (2014), Fern´andezGonz´alez and Martins (2015) and Crabb´e (2015). In line with Bj¨orkelund et al. (2013), our preliminary experiments with the treatment of rare words suggest that morphological features are useful for all SPMRL languages except French. Specifically, for Basque, Hungarian and Korean, improvements are significantly large. Our results show that the optimization of the 11See more in http://cohort.inf.ed.ac.uk/ lpcfg/. 1552 preterminals interminals all language P i xi P i yi div. #nts P i xi P i yi div. #nts P i xi P i yi div. 
#nts Basque 311 419 196 169 91 227 152 31 402 646 348 200 French 839 715 476 108 1145 1279 906 114 1984 1994 1382 222 German-N 425 567 416 109 323 578 361 99 748 1145 777 208 German-T 1251 890 795 378 1037 1323 738 384 2288 2213 1533 762 Hebrew 434 442 182 279 169 544 393 96 603 986 575 375 Hungarian 457 415 282 87 186 261 129 25 643 676 411 112 Korean 1077 980 547 331 218 220 150 21 1295 1200 697 352 Polish 252 311 197 135 132 180 86 63 384 491 283 198 Swedish 191 284 127 106 85 345 266 42 276 629 393 148 Table 4: A comparison of the number of latent states for the different nonterminals before and after running our latent state number optimization algorithm. The index i ranges over preterminals and interminals, with xi denoting the number of latent states for nonterminal i with the vanilla version of the estimation algorithm and yi denoting the number of latent states for nonterminal i after running the optimization algorithm. The divergence figure (“div.”) is a calculation of P i |xi −yi|. number of latent states with the clustering and spectral algorithms indeed improves these algorithms performance, and these increases generalize to the test sets as well. This was a point of concern, since the optimization algorithm goes through many points in the hypothesis space of parsing models, and identifies one that behaves optimally on the development set – and as such it could overfit to the development set. However, this did not happen, and in some cases, the increase in accuracy of the test set after running our optimization algorithm is actually larger than the one for the development set. While the vanilla estimation algorithms (without latent state optimization) lag behind the Berkeley parser for many of the languages, once the number of latent states is optimized, our parsing models do better for Basque, Hebrew, Hungarian, Korean, Polish and Swedish. For GermanT we perform close to the Berkeley parser (78.2 vs. 78.3). It is also interesting to compare the clustering algorithm of Narayan and Cohen (2015) to the spectral algorithm of Cohen et al. (2013). In the vanilla version, the spectral algorithm does better in most cases. However, these differences are narrowed, and in some cases, overcome, when the number of latent states is optimized. Decoding with multiple models further improves our accuracy. Our “Cl multiple” results lag behind “Bk multiple.” We believe this is the result of the need of head features for the MaxEnt models.12 Our results show that spectral learning is a viable alternative to the use of expectation12Bj¨orkelund et al. (2013) also use the MaxEnt raranker with multiple models of the Berkeley parser, and in their case also the performance after the raranking step is not always significantly better. See footnote 10 on how we create dummy head-features for our MaxEnt models. maximization coarse-to-fine techniques. As we discuss later, further improvements have been introduced to state-of-the-art parsers that are orthogonal to the use of a specific estimation algorithm. Some of them can be applied to our setup. 4.3 Further Analysis In addition to the basic set of parsing results, we also wanted to inspect the size of the parsing models when using the optimization algorithm in comparison to the vanilla models. Table 4 gives this analysis. In this table, we see that in most cases, on average, the optimization algorithm chooses to enlarge the number of latent states. 
However, for German-T and Korean, for example, the optimization algorithm actually chooses a smaller model than the original vanilla model. We further inspected the behavior of the optimization algorithm for the preterminals in German-N, for which the optimal model chose (on average) a larger number of latent states. Table 5 describes this analysis. We see that in most cases, the optimization algorithm chose to decrease the number of latent states for the various preterminals, but in some cases significantly increases the number of latent states.13 Our experiments dispel another “common wisdom” about spectral learning and training data size. It has been believed that spectral learning do not behave very well when small amounts of data are available (when compared to maximum likelihood estimation algorithms such as EM) – however we see that our results do better than the Berkeley parser for several languages with small 13Interestingly, most of the punctuation symbols, such as $∗LRB∗, $. and $,, drop their latent state number to a significantly lower value indicating that their interactions with other nonterminals in the tree are minimal. 1553 preterminal freq. b. a. preterminal freq. b. a. preterminal freq. b. a. preterminal freq. b. a. PWAT 64 2 2 TRUNC 614 8 1 PIS 1,628 8 8 KON 8,633 8 30 XY 135 3 1 VAPP 363 6 4 $*LRB* 13,681 8 6 PPER 4,979 8 100 NP|NN 88 2 1 PDS 988 8 8 ADJD 6,419 8 60 $. 17,699 8 3 VMINF 177 3 5 AVP|ADV 211 4 11 KOUS 2,456 8 1 APPRART 6,217 8 15 PTKA 162 3 1 FM 578 8 3 PIAT 1,061 8 8 ADJA 18,993 8 10 VP|VVINF 409 6 2 VVIMP 76 2 1 NP|PPER 382 6 1 APPR 26,717 8 7 PRELAT 94 2 1 KOUI 339 5 2 VVPP 5,005 8 20 VVFIN 13,444 8 3 AP|ADJD 178 3 1 VAINF 1,024 8 1 PP|PROAV 174 3 1 $, 16,631 8 1 APPO 89 2 2 PRELS 2,120 8 40 VAFIN 8,814 8 1 VVINF 4,382 8 10 PWS 361 6 1 CARD 6,826 8 8 PTKNEG 1,884 8 8 ART 35,003 8 10 KOKOM 800 8 37 NE 17,489 8 6 PTKZU 1,586 8 1 ADV 15,566 8 8 VP|VVPP 844 8 5 PRF 2,158 8 1 VVIZU 479 7 1 PIDAT 1,254 8 20 PWAV 689 8 1 PDAT 1,129 8 1 PPOSAT 2,295 8 6 NN 68,056 8 12 APZR 134 3 2 PROAV 1,479 8 10 PTKVZ 1,864 8 3 VMFIN 3,177 8 1 Table 5: A comparison of the number of latent states for each preterminal for the German-N model, before (“b.”) running the latent state number optimization algorithm and after running it (“a.”). Note that some of the preterminals denote unary rules that were collapsed (the nonterminals in the chain are separated by |). We do not show rare preterminals with b. and a. both being 1. training datasets, such as Basque, Hebrew, Polish and Hungarian. The source of this common wisdom is that ML estimators tend to be statistically “efficient:” they extract more information from the data than spectral learning algorithms do. Indeed, there is no reason to believe that spectral algorithms are statistically efficient. However, it is not clear that indeed for L-PCFGs with the EM algorithm, the ML estimator is statistically efficient either. MLE is statistically efficient under specific assumptions which are not clearly satisfied with L-PCFG estimation. In addition, when the model is “incorrect,” (i.e. when the data is not sampled from L-PCFG, as we would expect from natural language treebank data), spectral algorithms could yield better results because they can mimic a higher order model. This can be understood through HMMs. When estimating an HMM of a low order with data which was generated from a higher order model, EM does quite poorly. 
However, if the number of latent states (and feature functions) is properly controlled with spectral algorithms, a spectral algorithm would learn a “product” HMM, where the states in the lower order model are the product of states of a higher order.14 State-of-the-art parsers for the SPMRL datasets improve the Berkeley parser in ways which are orthogonal to the use of the basic estimation algorithm and the method for optimizing the number of latent states. They include transformations of the treebanks such as with unary rules (Bj¨orkelund et al., 2013), a more careful handling of unknown words and better use of morphological informa14For example, a trigram HMM can be reduced to a bigram HMM where the states are products of the original trigram HMM. tion such as decorating preterminals with such information (Bj¨orkelund et al., 2014; Sz´ant´o and Farkas, 2014), with careful feature specifications (Hall et al., 2014) and head-annotations (Crabb´e, 2015), and other techniques. Some of these techniques can be applied to our case. 5 Conclusion We demonstrated that a careful selection of the number of latent states in a latent-variable PCFG with spectral estimation has a significant effect on the parsing accuracy of the L-PCFG. We described a search procedure to do this kind of optimization, and described parsing results for eight languages (with nine datasets). Our results demonstrate that when comparing the expectationmaximization with coarse-to-fine techniques to our spectral algorithm with latent state optimization, spectral learning performs better on six of the datasets. Our results are comparable to other stateof-the-art results for these languages. Using a diverse set of models to parse these datasets further improves the results. Acknowledgments The authors would like to thank David McClosky for his help with running the BLLIP parser and his comments on the paper and also the three anonymous reviewers for their helpful comments. We also thank Eugene Charniak, DK Choe and Geoff Gordon for useful discussions. Finally, thanks to Djam´e Seddah for providing us with the SPMRL datasets and to Thomas M¨uller and Anders Bj¨orkelund for providing us the MarMot models. This research was supported by an EPSRC grant (EP/L02411X/1) and an EU H2020 grant (688139/H2020-ICT-2015; SUMMA). 1554 References Rapha¨el Bailly, Amaury Habrard, and Franc¸ois Denis. 2010. A spectral approach for probabilistic grammatical inference on trees. In Proceedings of International Conference on Algorithmic Learning Theory. Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, Rich´ard Farkas, Thomas M¨ueller, and Wolfgang Seeker. 2013. (Re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, Agnieszka Fale´nska, Rich´ard Farkas, Thomas M¨uller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. Introducing the IMS-Wrocław-Szeged-CIS entry at the SPMRL 2014 shared task: Reranking and morphosyntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages. Ezra W. Black, Steven Abney, Daniel P. Flickinger, Claudia Gdaniec, Ralph Grishman, Philip Harrison, Donald Hindle, Robert J. P. Ingria, Frederick Jelinek, Judith L. Klavans, Mark Y. Liberman, Mitchell P. Marcus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. 1991. 
A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of DARPA Workshop on Speech and Natural Language. Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen-Schirra, Esther K¨onig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic interpretation of a German corpus. Research on Language and Computation, 2(4):597–620. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL. Shay B. Cohen and Michael Collins. 2014. A provably correct learning algorithm for latent-variable PCFGs. In Proceedings of ACL. Shay B. Cohen, Karl Stratos, Michael Collins, Dean F. Foster, and Lyle Ungar. 2012. Spectral learning of latent-variable PCFGs. In Proceedings of ACL. Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2013. Experiments with spectral learning of latent-variable PCFGs. In Proceedings of NAACL. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Benoit Crabb´e. 2015. Multilingual discriminative lexicalized phrase structure parsing. In Proceedings of EMNLP. Daniel Fern´andez-Gonz´alez and Andr´e F. T. Martins. 2015. Parsing as reduction. In Proceedings of ACLIJCNLP. David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of ACL. Daniel Hsu, Sham M. Kakade, and Tong Zhang. 2009. A spectral algorithm for learning hidden Markov models. In Proceedings of COLT. Andr´e F. T. Martins, Noah A. Smith, Eric P. Xing, M´ario A. T. Figueiredo, and Pedro M. Q. Aguiar. 2010. TurboParsers: Dependency parsing by approximate variational inference. In Proceedings of EMNLP. Takuya Matsuzaki, Yusuke Miyao, and Junichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of ACL. Thomas M¨ueller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of EMNLP. Shashi Narayan and Shay B. Cohen. 2015. Diversity in spectral learning for natural language parsing. In Proceedings of EMNLP. Ankur P. Parikh, Le Song, Mariya Ishteva, Gabi Teodoru, and Eric P. Xing. 2012. A spectral algorithm for latent junction trees. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of COLING-ACL. Slav Petrov. 2010. Products of random latent variable grammars. In Proceedings of HLT-NAACL. Detlef Prescher. 2005. Head-driven PCFGs with latent-head statistics. In Proceedings of IWPT. Guillaume Rabusseau, Borja Balle, and Shay B. Cohen. 2016. Low-rank approximation of weighted tree automata. In Proceedings of The 19th International Conference on Artificial Intelligence and Statistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Cl´ergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. 
Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of ANLP. Zsolt Szántó and Richárd Farkas. 2014. Special techniques for constituent parsing of morphologically rich languages. In Proceedings of EACL.
2016
146
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1557–1566, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Stack-propagation: Improved Representation Learning for Syntax Yuan Zhang∗ CSAIL, MIT Cambridge, MA 02139, USA [email protected] David Weiss Google Inc New York, NY 10027, USA [email protected] Abstract Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates. We demonstrate that a better approach is to utilize POS tags as a regularizer of learned representations. We propose a simple method for learning a stacked pipeline of models which we call “stack-propagation”. We apply this to dependency parsing and tagging, where we use the hidden layer of the tagger network as a representation of the input tokens for the parser. At test time, our parser does not require predicted POS tags. On 19 languages from the Universal Dependencies, our method is 1.3% (absolute) more accurate than a state-of-the-art graph-based approach and 2.7% more accurate than the most comparable greedy model. 1 Introduction In recent years, transition-based dependency parsers powered by neural network scoring functions have dramatically increased the state-of-theart in terms of both speed and accuracy (Chen and Manning, 2014; Alberti et al., 2015; Weiss et al., 2015). Similar approaches also achieve state-ofthe-art in other NLP tasks, such as constituency parsing (Durrett and Klein, 2015) or semantic role labeling (FitzGerald et al., 2015). These approaches all share a common principle: replace hand-tuned conjunctions of traditional NLP feature templates with continuous approximations learned by the hidden layer of a feed-forward network. ∗Research conducted at Google. However, state-of-the-art dependency parsers depend crucially on the use of predicted part-ofspeech (POS) tags. In the pipeline or stacking (Wolpert, 1992) method, these are predicted from an independently trained tagger and used as features in the parser. However, there are two main disadvantages of a pipeline: (1) errors from the POS tagger cascade into parsing errors, and (2) POS taggers often make mistakes precisely because they cannot take into account the syntactic context of a parse tree. The POS tags may also contain only coarse information, such as when using the universal tagset of Petrov et al. (2011). One approach to solve these issues has been to avoid using POS tags during parsing, e.g. either using semi-supervised clustering instead of POS tags (Koo et al., 2008) or building recurrent representations of words using neural networks (Dyer et al., 2015; Ballesteros et al., 2015). However, the best accuracy for these approaches is still achieved by running a POS tagger over the data first and combining the predicted POS tags with additional representations. As an alternative, a wide range of prior work has investigated jointly modeling both POS and parse trees (Li et al., 2011; Hatori et al., 2011; Bohnet and Nivre, 2012; Qian and Liu, 2012; Wang and Xue, 2014; Li et al., 2014; Zhang et al., 2015; Alberti et al., 2015). However, these approaches typically require sacrificing either efficiency or accuracy compared to the best pipeline model, and often they simply re-rank the predictions of a pipelined POS tagger. In this work, we show how to improve accuracy for both POS tagging and parsing by incorporating stacking into the architecture of a feed-forward network. 
We propose a continuous form of stacking that allows for easy backpropagation down the pipeline across multiple tasks, a process we call 1557 Backpropagation Task A Task B Traditional Stacking Stack-propagation Task A Task B Figure 1: Traditional stacking (left) vs. Stack-propagation (right). Stacking uses the output of Task A as features in Task B, and does not allow backpropagation between tasks. Stack-propagation uses a continuous and differentiable link between Task A and Task B, allowing for backpropagation from Task B into Task A’s model. Updates to Task A act as regularization on the model for Task B, ensuring the shared component is useful for both tasks. “stack-propagation” (Figure 1). At the core of this idea is that we use POS tags as regularization instead of features. Our model design for parsing is very simple: we use the hidden layer of a window-based POS tagging network as the representation of tokens in a greedy, transition-based neural network parser. Both networks are implemented with a refined version of the feed-forward network (Figure 3) from Chen and Manning (2014), as described in Weiss et al. (2015). We link the tagger network to the parser by translating traditional feature templates for parsing into feed-forward connections from the tagger to the parser (Figure 2). At training time, we unroll the parser decisions and apply stackpropagation by alternating between stochastic updates to the parsing or tagging objectives (Figure 4). The parser’s representations of tokens are thus regularized to be individually predictive of POS tags, even as they are trained to be useful for parsing when concatenated and fed into the parser network. This model is similar to the multi-task network structure of Collobert et al. (2011), where Collobert et al. (2011) shares a hidden layer between multiple tagging tasks. The primary difference here is that we show how to unroll parser transitions to apply the same principle to tasks with fundamentally different structure. The key advantage of our approach is that at test time, we do not require predicted POS tags for parsing. Instead, we run the tagger network up to the hidden layer over the entire sentence, and then dynamically connect the parser network to the tagger network based upon the discrete parser configurations as parsing unfolds. In this way, we avoid cascading POS tagging errors to the parser. As we show in Section 5, our approach can be used in conjunction with joint transition systems in the parser to improve both POS tagging as well as parsing. In addition, because the parser re-uses the representation from the tagger, we can drop all lexicalized features from the parser network, leading to a compact, faster model. The rest of the paper is organized as follows. In Section 2, we describe the layout of our combined architecture. In Section 3, we introduce stackpropagation and show how we train our model. We evaluate our approach on 19 languages from the Universal Dependencies treebank in Section 4. We observe a >2% absolute gain in labeled accuracy compared to state-of-the-art, LSTM-based greedy parsers (Ballesteros et al., 2015) and a >1% gain compared to a state-of-the-art, graphbased method (Lei et al., 2014). We also evaluate our method on the Wall Street Journal, where we find that our architecture outperforms other greedy models, especially when only coarse POS tags from the universal tagset are provided during training. 
In Section 5, we systematically evaluate the different components of our approach to demonstrate the effectiveness of stack-propagation compared to traditional types of joint modeling. We also show that our approach leads to large reductions in cascaded errors from the POS tagger. We hope that this work will motivate further research in combining traditional pipelined structured prediction models with deep neural architectures that learn intermediate representations in a task-driven manner. One important finding of this work is that, even without POS tags, our architecture outperforms recurrent approaches that build custom word representations using character-based LSTMs (Ballesteros et al., 2015). These results suggest that learning rich embeddings of words may not be as important as building an intermediate representation that takes multiple features of the surrounding context into account. Our results also suggest that deep models for dependency parsing may not discover POS classes when trained solely for parsing, even when it is fully within the capacity of the model. Designing architectures to apply stack-propagation in other coupled NLP tasks might yield significant accuracy improvements for deep learning. 1558 I ate a snack today … … { sed r { I ate a snack today { { I ate a snack today … … Tagger network (window-based) PRON VERB DET NOUN NOUN Parser network (state-based) I ate a snack Buffer I ate ate a ate snack ate Parser
[Figure 2, top panel: stack and buffer states for the example "I ate a snack today", with the transition sequence SHIFT, LEFT(nsubj), SHIFT, SHIFT, LEFT(det), RIGHT(dobj), SHIFT, RIGHT(tmod); the discrete state updates determine which tagger activations are connected to the parser at each step.]
 link NN link Parser network Tagger network stack:1 stack:0 input:0 Figure 2: Detailed example of the stacked parsing model. Top: The discrete parser state, consisting of the stack and the buffer, is updated by the output of the parser network. In turn, the feature templates used by the parser are a function of the state. In this example, the parser has three templates, stack:0, stack:1, and input:0. Bottom: The feature templates create many-to-many connections from the hidden layer of the tagger to the input layer of the parser. For example, the predicted root of the sentence (“ate”) is connected to the input of most parse decisions. At test time, the above structure is constructed dynamically as a function of the parser output. Note also that the predicted POS tags are not directly used by the parser. 2 Continuous Stacking Model In this section, we introduce a novel neural network model for parsing and tagging that incorporates POS tags as a regularization of learned implicit representations. The basic unit of our model (Figure 3) is a simple, feed-forward network that has been shown to work very well for parsing tasks (Chen and Manning, 2014; Weiss et al., 2015). The inputs to this unit are feature matrices which are embedded and passed as input to a hidden layer. The final layer is a softmax prediction. We use two such networks in this work: a window-based version for tagging and a transition-based version for dependency parsing. In a traditional stacking (pipeline) approach, we would use the discrete predicted POS tags from the tagger as features in the parser (Chen and Manning, 2014). In our model, we instead feed the continuous hidden layer activations of the tagger network as input to the parser. The primary strength of our approach is that the parser has access to all of the features and information used by the POS tagger during training time, but it is allowed to make its own decisions at test time. To implement this, we show how we can reuse feature templates from Chen and Manning (2014) to specify the feed-forward connections from the tagger network to the parser network. An interesting consequence is that because this structure is a function of the derivation produced by the parser, the final feed-forward structure of the stacked model is not known until run-time. Prefixes Words Feature
 xtraction Suffixes Clusters bedding en (Relu) oftmax Words Suffixes X> 0 E0 X> 1 E1 X> GEG … P(y) / exp{β > y h0 + by} h0 = max{0, W>hX> g Egi + b0} Embedding Layer Hidden Layer Softmax Layer Feature templates D=64 D=16 Figure 3: Elementary NN unit used in our model. Feature matrices from multiple channels are embedded, concatenated together, and fed into a rectified linear hidden layer. In the parser network, the feature inputs are continuous representations from the tagger network’s hidden layer. However, because the connections for any specific parsing decision are fixed given the derivation, we can still extract examples for training off-line by unrolling the network structure from gold derivations. In other words, we can utilize our approach with the same simple stochastic optimization techniques used in prior works. Figure 2 shows a fully unrolled architecture on a simple example. 2.1 The Tagger Network As described above, our POS tagger follows the basic structure from prior work with embedding, hidden, and softmax layers. Like the “windowapproach” network of Collobert et al. (2011), the tagger is evaluated per-token, with features extracted from a window of tokens surrounding the target. The input consists of a rich set of fea1559 The fox jumps . stack buffer det (a) Parser configuration c. Figure 4: A schematic for the PARSER stack-propagation update. a: Example parser configuration c with corresponding stack and buffer. b: Forward and backward stages for the given single example. During the forward phase, the tagger networks compute hidden activations for each feature template (e.g. stack0 and buffer0), and activations are fed as features into the parser network. For the backward update, we backpropagate training signals from the parser network into each linked tagging example. Tagger Forward Feature Extractor Embedding Layer Hidden Layer w0=fox, w-1=the, … w0=jumps, w-1=fox, … Parser … … ∆implicit bu↵er0 ∆implicit stack0 Backward stack0.lab=det,… Embedding Layer … … Hidden Layer Softmax Layer Xlabel ximplicit stack0 (b) Forward and backward schema for (a). ximplicit bu↵er0 ximplicit stack0 ximplicit bu↵er0 ∆implicit stack0 ∆implicit bu↵er0 tures for POS tagging that are deterministically extracted from the training data. As in prior work, the features are divided into groups of different sizes that share an embedding matrix E. Features for each group g are represented as a sparse matrix Xg with dimension F g × V g, where F g is the number of feature templates in the group, and V g is the vocabulary size of the feature templates. Each row of Xg is a one-hot vector indicating the appearance of each feature. The network first looks up the learned embedding vectors for each feature and then concatenates them to form the embedding layer. This embedding layer can be written as: h0 = [XgEg | ∀g] (1) where Eg is a learned V g × Dg embedding matrix for feature group. Thus, the final size |h0| = P g F gDg is the sum of all embedded feature sizes. The specific features and their dimensions used in the tagger are listed in Table 1. Note that for all features, we create additional null value that triggers when features are extracted outside the scope of the sentence. We use a single hidden layer in our model and apply rectified linear unit (ReLU) activation function over the hidden layer outputs. A final softmax layer reads in the activations and outputs probabilities for each possible POS tag. 
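To make the layer structure concrete, the following NumPy sketch traces one forward pass of this unit, reading Eq. (1) and Figure 3 schematically; the function and parameter names are ours and the shapes are placeholders, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def tagger_forward(feature_ids, embeddings, W, b, beta, b_out):
    """One forward pass of the window-based tagger unit (cf. Eq. 1).

    feature_ids: dict mapping a feature group g to the list of feature
                 indices extracted for that group (one per template).
    embeddings:  dict mapping g to its V_g x D_g embedding matrix E_g.
    W, b:        hidden-layer weights and bias.
    beta, b_out: softmax-layer parameters over the POS tag set.
    Returns the hidden activations h1 and the tag distribution p.
    """
    # Embedding layer: look up and concatenate the embedded features.
    h0 = np.concatenate([embeddings[g][ids].ravel()
                         for g, ids in feature_ids.items()])
    # Rectified-linear hidden layer.
    h1 = np.maximum(0.0, W.dot(h0) + b)
    # Softmax layer over POS tags.
    scores = beta.dot(h1) + b_out
    p = np.exp(scores - scores.max())
    return h1, p / p.sum()
```

Note that it is the returned hidden activations h1, not the softmax output, that the parser consumes as token representations (Section 2.2).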
2.2 The Parser Network The parser component follows the same design as the POS tagger with the exception of the features and the output space. Instead of a windowbased classifier, features are extracted from an arcFeatures (g) Window D Symbols 1 8 Capitalization +/- 1 4 Prefixes/Suffixes (n = 2, 3) +/- 1 16 Words +/-3 64 Table 1: Window-based tagger feature spaces. “Symbols” indicates whether the word contains a hyphen, a digit or a punctuation. standard parser configuration1 c consisting of the stack s, the buffer b and the so far constructed dependencies (Nivre, 2004). Prior implementations of this model used up to four groups of discrete features: words, labels (from previous decisions), POS tags, and morphological attributes (Chen and Manning, 2014; Weiss et al., 2015; Alberti et al., 2015). In this work, we apply the same design principle but we use an implicitly learned intermediate representation in the parser to replace traditional discrete features. We only retain discrete features over the labels in the incrementally constructed tree (Figure 4). Specifically, for any token of interest, we feed the hidden layer of the tagger network evaluated for that token as input to the parser. We implement this idea by re-using the feature templates from prior work as indexing functions. We define this process formally as follows. Let fi(c) be a function mapping from parser configurations c to indices in the sentence, where i denotes each of our feature templates. For example, in Figure 4(a), when i =stack0, fi(c) is the in1Note that the “stack” in the parse configuration is separate from the “stacking” of the POS tagging network and the parser network (Figure 1). 1560 dex of “fox” in the sentence. Let htagger 1 (j) be the hidden layer activation of the tagger network evaluated at token j. We define the input Ximplicit by concatenating these tagger activations according to our feature templates: ximplicit i ≜htagger 1 (fi(c)). (2) Thus, the feature group Ximplicit is the rowconcatenation of the hidden layer activations of the tagger, as indexed by the feature templates. We have that F implicit is the number of feature templates, and V implicit = Htagger, the number of possible values is the number of hidden units in the tagger. Just as for other features, we learn an embedding matrix Eimplicit of size Himplicit × F implicit. Note that as in the POS tagger network, we reserve an additional null value for out of scope feature templates. A full example of this lookup process, and the resulting feedforward network connections created, is shown for a simple three-feature template consisting of the top two tokens on the stack and the first on the buffer in Figure 2. See Table 1 for the full list of 20 tokens that we extract for each state. 3 Learning with Stack-propagation In this section we describe how we train our stacking architecture. At a high level, we simply apply backpropagation to our proposed continuous form of stacking (hence “stack-propagation.”) There are two major issues to address: (1) how to handle the dynamic many-to-many connections between the tagger network and the parser network, and (2) how to incorporate the POS tag labels during training. Addressing the first point turns out to be fairly easy in practice: we simply unroll the gold trees into a derivation of (state, action) pairs that produce the tree. The key property of our parsing model is that the connections of the feedforward network are constructed incrementally as the parser state is updated. 
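A minimal sketch of this linking step is given below, written as we understand Eq. (2) and Figure 2; the function name, the template interface, and the null handling are our own assumptions, and the parser side (embedding these rows and scoring transitions) is omitted.

```python
import numpy as np

def implicit_features(config, templates, tagger_hidden, null_vector):
    """Build the implicit feature matrix for one parser configuration (Eq. 2).

    config:        the current parser state (stack, buffer, built arcs).
    templates:     list of functions f_i(config) -> token index, or None
                   when the template falls outside the sentence
                   (e.g. stack0, stack1, input0 in Figure 2).
    tagger_hidden: array of shape (sentence_length, H_tagger) holding the
                   tagger's hidden activations, computed once per sentence.
    null_vector:   H_tagger vector reserved for out-of-scope templates.
    """
    rows = []
    for f in templates:
        j = f(config)
        rows.append(null_vector if j is None else tagger_hidden[j])
    # Row-concatenation of tagger activations, indexed by the templates;
    # these rows are then embedded and fed into the parser's hidden layer.
    return np.stack(rows)
```

Because tagger_hidden is computed once per sentence and only indexed here, each parser decision is a feed-forward function of the current configuration alone.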
This is different than a generic recurrent model such as an LSTM, which passes activation vectors from one step to the next. The important implication at training time is that, unlike a recurrent network, the parser decisions are conditionally independent given a fixed history. In other words, if we unroll the network structure ahead of time given the gold derivation, we do not need to perform inference when training with respect to these examples. Thus, the overall training procedure is similar to that introduced in Chen and Manning (2014). To incorporate the POS tags as a regularization during learning, we take a fairly standard approach from multi-task learning. The objective of learning is to find parameters Θ that maximize the data log-likelihood with a regularization on Θ for both parsing and tagging: max Θ λ X x,y∈T log(PΘ(y | x))+ X c,a∈P log (PΘ(a | c)) , (3) where {x, y} are POS tagging examples extracted from individual tokens and {c, a} are parser (configuration, action) pairs extracted from the unrolled gold parse tree derivations, and λ is a tradeoff parameter. We optimize this objective stochastically by alternating between two updates: • TAGGER: Pick a POS tagging example and update the tagger network with backpropagation. • PARSER: (Figure 4) Given a parser configuration c from the set of gold contexts, compute both tagger and parser activations. Backpropagate the parsing loss through the stacked architecture to update both parser and tagger, ignoring the tagger’s softmax layer parameters. While the learning procedure is inspired from multi-task learning—we only update each step with regards one of the two likelihoods—there are subtle differences that are important. While a traditional multi-task learning approach would use the final layer of the parser network to predict both POS tags and parse trees, we predict POS tags from the first hidden layer of our model (the “tagger” network) only. We treat the POS labels as regularization of our parser and simply discard the softmax layer of the tagger network at test time. As we will show in Section 4, this regularization leads to dramatic gains in parsing accuracy. Note that in Section 5, we also show experimentally that stack-propagation is more powerful than the traditional multi-task approach, and by combining them together, we can achieve better accuracy on both POS and parsing tasks. 1561 Method ar bg da de en es eu fa fi fr hi id it iw nl no pl pt sl AVG NO TAGS B’15 LSTM 75.6 83.1 69.6 72.4 77.9 78.5 67.5 74.7 73.2 77.4 85.9 72.3 84.1 73.1 69.5 82.4 78.0 79.9 80.1 76.6 Ours (window) 76.1 82.9 70.9 71.7 79.2 79.3 69.1 77.5 72.5 78.2 87.1 71.8 83.6 76.2 72.3 83.2 77.8 79.0 79.8 77.3 UNIVERSAL TAGSET B’15 LSTM 74.6 82.4 68.1 73.0 77.9 77.8 66.0 75.0 73.6 78.0 86.8 72.2 84.2 74.5 68.4 83.3 74.5 80.4 78.1 76.2 Pipeline Ptag 73.7 83.6 72.0 73.0 79.3 79.5 63.0 78.0 66.9 78.5 87.8 73.5 84.2 75.4 70.3 83.6 73.4 79.5 79.4 76.6 RBGParser 75.8 83.6 73.9 73.5 79.9 79.6 68.0 78.5 65.4 78.9 87.7 74.2 84.7 77.6 72.4 83.9 75.4 81.3 80.7 77.6 Stackprop 77.0 84.3 73.8 74.2 80.7 80.7 70.1 78.5 74.5 80.0 88.9 74.1 85.8 77.5 73.6 84.7 79.2 80.4 81.8 78.9 Table 2: Labeled Attachment Score (LAS) on Universal Dependencies Treebank. Top: Results without any POS tag observations. “B’15 LSTM” is the character-based LSTM model (Ballesteros et al., 2015), while “Ours (window)” is our window-based architecture variant without stackprop. Bottom: Comparison against state-of-the-art baselines utilizing the POS tags. 
Paired t-tests show that the gain of Stackprop over all other approaches is significant (p < 10−5 for all but RBGParser, which is p < 0.02). 3.1 Implementation details Following Weiss et al. (2015), we use minibatched averaged stochastic gradient descent (ASGD) (Bottou, 2010) with momentum (Hinton, 2012) to learn the parameters Θ of the network. We use a separate learning rate, moving average, and velocity for the tagger network and the parser; the PARSER updates all averages, velocities, and learning rates, while the TAGGER updates only the tagging factors. We tuned the hyperparameters of momentum rate µ, the initial learning rate η0 and the learning rate decay step γ using held-out data. The training data for parsing and tagging can be extracted from either the same corpus or different corpora; in our experiments they were always the same. To trade-off the two objectives, we used a random sampling scheme to perform 10 epochs of PARSER updates and 5 epochs of TAGGER updates. In our experiments, we found that pretraining with TAGGER updates for one epoch before interleaving PARSER updates yielded faster training with better results. We also experimented using the TAGGER updates solely for initializing the parser and found that interleaving updates was crucial to obtain improvements over the baseline. 4 Experiments In this section, we evaluate our approach on several dependency parsing tasks across a wide variety of languages. 4.1 Experimental Setup We first investigated our model on 19 languages from the Universal Dependencies Treebanks v1.2.2 We selected the 19 largest cur2http://universaldependencies.org rently spoken languages for which the full data was freely available. We used the coarse universal tagset in our experiments with no explicit morphological annotations. To measure parsing accuracy, we report unlabeled attachment score (UAS) and labeled attachment score (LAS) computed on all tokens (including punctuation), as is standard for non-English datasets. For simplicity, we use the arc-standard (Nivre, 2004) transition system with greedy decoding. Because this transition system only produces projective trees, we first apply a projectivization step to all treebanks before unrolling the gold derivations during training. We make an exception for Dutch, where we observed a significant gain on development data by introducing the SWAP action (Nivre, 2009) and allowing non-projective trees. For models that required predicted POS tags, we trained a window-based tagger using the same features as the tagger component of our stacking model. We used 5-fold jackknifing to produce predicted tags on the training set. We found that the window-based tagger was comparable to a stateof-the-art CRF tagger for most languages. For every network we trained, we used the development data to evaluate a small range of hyperparameters, stopping training early when UAS no longer improved on the held-out data. We use H = 1024 hidden units in the parser, and H = 128 hidden units in the tagger. The parser embeds the tagger activations with D = 64. Note that following Ballesteros et al. (2015), we did not use any auxiliary data beyond that in the treebanks, such as pre-trained word embeddings. For a final set of experiments, we evaluated on the standard Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al., 1993)), dependencies generated from version 3.3.0 of the Stanford 1562 Method UAS LAS NO TAGS Dyer et al. 
(2015) 92.70 90.30 Ours (window-based) 92.85 90.77 UNIVERSAL TAGSET Pipeline (Ptag) 92.52 90.50 Stackprop 93.23 91.30 FINE TAGSET Chen & Manning (2014) 91.80 89.60 Dyer et al. (2015) 93.10 90.90 Pipeline (Ptag) 93.10 91.16 Stackprop 93.43 91.41 Weiss et al. (2015) 93.99 92.05 Alberti et al. (2015) 94.23 92.36 Table 3: WSJ Test set results for greedy and state-of-the-art methods. For reference, we show the most accurate models from Alberti et al. (2015) and Weiss et al. (2015), which use a deeper model and beam search for inference. converter (De Marneffe et al., 2006). We followed standard practice and used sections 2-21 for training, section 22 for development, and section 23 for testing. Following Weiss et al. (2015), we used section 24 to tune any hyperparameters of the model to avoid overfitting to the development set. As is common practice, we use pretrained word embeddings from the word2vec package when training on this dataset. 4.2 Results We present our main results on the Universal Treebanks in Table 2. We directly compare our approach to other baselines in two primary ways. First, we compare the effectiveness of our learned continuous representations with those of Alberti et al. (2015), who use the predicted distribution over POS tags concatenated with word embeddings as input to the parser. Because they also incorporate beam search into training, we re-implement a greedy version of their method to allow for direct comparisons of token representations. We refer to this as the “Pipeline (Ptag)” baseline. Second, we also compare our architecture trained without POS tags as regularization, which we refer to as “Ours (window-based)”. This model has the same architecture as our full model but with no POS supervision and updates. Since this model never observes POS tags in any way, we compare against a recurrent character-based parser (Ballesteros et al., Model Variant UAS LAS POS Arc-standard transition system Pipeline (Ptag) 81.56 76.55 95.14 Ours (window-based) 82.08 77.08 Ours (Stackprop) 83.38 78.78 Joint parsing & tagging transition system Pipeline (Ptag) 81.61 76.57 95.30 Ours (window-based) 82.58 77.76 94.92 Ours (Stackprop) 83.21 78.64 95.43 Table 4: Averaged parsing and POS tagging results on the UD treebanks for joint variants of stackprop. Given the windowbased architecture, stackprop leads to higher parsing accuracies than joint modeling (83.38% vs. 82.58%). 2015) which is state-of-the-art when no POS tags are provided.3 Finally, we compare to RGBParser (Lei et al., 2014), a state-of-the art graph-based (non-greedy) approach. Our greedy stackprop model outperforms all other methods, including the graph-based RBGParser, by a significant margin on the test set (78.9% vs 77.6%). This is despite the limitations of greedy parsing. Stackprop also yields a 2.3% absolute improvement in accuracy compared to using POS tag confidences as features (Pipeline Ptag). Finally, we also note that adding stackprop to our window-based model improves accuracy in every language, while incorporating predicted POS tags into the LSTM baseline leads to occasional drops in accuracy (most likely due to cascaded errors.) 5 Discussion Stackprop vs. other representations. One unexpected result was that, even without the POS tag labels at training time, our stackprop architecture achieves better accuracy than either the character-based LSTM or the pipelined baselines (Table 2). 
This suggests that adding windowbased representations–which aggregate over many features of the word and surrounding context– is more effective than increasing the expressiveness of individual word representations by using character-based recurrent models. In future work we will explore combining these two complementary approaches. We hypothesized that stackprop might provide larger gains over the pipelined model when the 3We thank Ballesteros et al. (2015) for their assistance running their code on the treebanks. 1563 Token married by a judge. Don’t judge a book by and walked away satisfied when I walk in the door Neighbors mesmerizing as a rat. doesn’t change the company’s tried, and tried hard upset when I went to A staple! won’t charge your phone and incorporated into I mean besides me day at a bar, then go don’t waste your money and belonged to the I felt as if I Pattern a [noun] ’nt [verb] and [verb]ed I [verb] Table 5: Four of examples of tokens in context, along with the three most similar tokens according to the tagger network’s activations, and the simple pattern exhibited. Note that this model was trained with the Universal tagset which does not distinguish verb tense. POS tags are very coarse. We tested this latter hypothesis on the WSJ corpus by training our model using the coarse universal tagsets instead of the fine tagset (Table 3). We found that stackprop achieves similar accuracy using coarse tagsets as the fine tagset, while the pipelined baseline’s performance drops dramatically. And while stackprop doesn’t achieve the highest reported accuracies on the WSJ, it does achieve competitive accuracies and outperforms prior state-of-the-art for greedy methods (Dyer et al., 2015). Stackprop vs. joint modeling. An alternative to stackprop would be to train the final layer of our architecture to predict both POS tags and dependency arcs. To evaluate this, we trained our window-based architecture with the integrated transition system of Bohnet and Nivre (2012), which augments the SHIFT transition to predict POS tags. Note that if we also apply stackprop, the network learns from POS annotations twice: once in the TAGGER updates, and again the PARSER updates. We therefore evaluated our window-based model both with and without stack-propagation, and with and without the joint transition system. We compare these variants along with our reimplementation of the pipelined model of Alberti et al. (2015) in Table 4. We find that stackprop is always better, even when it leads to “double counting” the POS annotations; in this case, the result is a model that is significantly better at POS tagging while marginally worse at parsing than stackprop alone. Reducing cascaded errors. As expected, we observe a significant reduction in cascaded POS tagging errors. An example from the English UD treebank is given in Figure 5. Across the 19 languages in our test set, we observed a 10.9% gain (34.1% vs. 45.0%) in LAS on tokens where the pipelined POS tagger makes a mistake, compared to a 1.8% gain on the rest of the corpora. Heterosexuals increasingly back gay marriage root advmod advmod amod dobj NOUN ADV ADV ADJ NOUN (a) Tree by a pipeline model. Heterosexuals increasingly back gay marriage nsubj advmod root amod dobj NOUN ADV ADV ADJ NOUN (b) Tree by Stackprop model. Figure 5: Example comparison between predictions by a pipeline model and a joint model. 
While both models predict a wrong POS tag for the word “back” (ADV rather than VERB), the joint model is robust to this POS error and predict the correct parse tree. Decreased model size. Previous neural parsers that use POS tags require learning embeddings for words and other features on top of the parameters used in the POS tagger (Chen and Manning, 2014; Weiss et al., 2015). In contrast, the number of total parameters for the combined parser and tagger in the Stackprop model is reduced almost by half compared to the Pipeline model, because the parser and tagger share parameters. Furthermore, compared to our implementation of the pipeline model, we observed that this more compact parser model was also roughly twice as fast. Contextual embeddings. Finally, we also explored the significance of the representations learned by the tagger. Unlike word embedding models, the representations used in our parser are constructed for each token based on its surrounding context. We demonstrate a few interesting trends we observed in Table 5, where we show the nearest neighbors to sample tokens in this contextual embedding space. These representations tend to represent syntactic patterns rather than individual words, distinguishing between the form (e.g. “judge” as a noun vs. a verb’) and context of tokens (e.g. preceded by a personal pronoun). 1564 6 Conclusions We present a stacking neural network model for dependency parsing and tagging. Through a simple learning method we call “stack-propagation,” our model learns effective intermediate representations for parsing by using POS tags as regularization of implicit representations. Our model outperforms all state-of-the-art parsers when evaluated on 19 languages of the Universal Dependencies treebank and outperforms other greedy models on the Wall Street Journal. We observe that the ideas presented in this work can also be as a principled way to optimize upstream NLP components for down-stream applications. In future work, we will extend this idea beyond sequence modeling to improve models in NLP that utilize parse trees as features. The basic tenet of stack-propagation is that the hidden layers of neural models used to generate annotations can be used instead of the annotations themselves. This suggests a new methodology to building deep neural models for NLP: we can design them from the ground up to incorporate multiple sources of annotation and learn far more effective intermediate representations. Acknowledgments We would like to thank Ryan McDonald, Emily Pitler, Chris Alberti, Michael Collins, and Slav Petrov for their repeated discussions, suggestions, and feedback, as well all members of the Google NLP Parsing Team. We would also like to thank Miguel Ballesteros for assistance running the character-based LSTM. References Chris Alberti, David Weiss, Greg Coppola, and Slav Petrov. 2015. Improved transition-based parsing and tagging with neural networks. In Proceedings of EMNLP 2015. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. Proceddings of EMNLP. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455–1465. Association for Computational Linguistics. L´eon Bottou. 2010. 
Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1, pages 740–750. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. JMLR. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of Fifth International Conference on Language Resources and Evaluation, pages 449– 454. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. ACL. Nicholas FitzGerald, Oscar Tckstrm, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP ’15). Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2011. Incremental joint pos tagging and dependency parsing in chinese. In IJCNLP, pages 1216–1224. Citeseer. Geoffrey E Hinton. 2012. A practical guide to training restricted boltzmann machines. In Neural Networks: Tricks of the Trade, pages 599–619. Springer. Terry Koo, Xavier Carreras P´erez, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In 46th Annual Meeting of the Association for Computational Linguistics, pages 595–603. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1381–1391. Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, Wenliang Chen, and Haizhou Li. 2011. Joint models for chinese pos tagging and dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1180–1191. Association for Computational Linguistics. 1565 Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, and Wenliang Chen. 2014. Joint optimization for chinese POS tagging and dependency parsing. IEEE/ACM Transactions on Audio, Speech & Language Processing, pages 274–286. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, IncrementParsing ’04, pages 50–57, Stroudsburg, PA, USA. Association for Computational Linguistics. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086. Xian Qian and Yang Liu. 2012. Joint chinese word segmentation, pos tagging and parsing. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 501– 511. Association for Computational Linguistics. Zhiguo Wang and Nianwen Xue. 2014. Joint pos tagging and transition-based constituent parsing in chinese with non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 733–742. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of ACL 2015, pages 323–333. David H Wolpert. 1992. Stacked generalization. Neural networks. Yuan Zhang, Chengtao Li, Regina Barzilay, and Kareem Darwish. 2015. Randomized greedy inference for joint segmentation, POS tagging and dependency parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics, pages 42–52. 1566
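As a brief aside on the evaluation metrics used throughout the experiments above, the unlabeled and labeled attachment scores (UAS/LAS) can be computed with a few lines of Python. This is an illustrative sketch, not the authors' evaluation script; it counts all tokens, including punctuation, as in the experimental setup, and the toy sentence only loosely follows the example in Figure 5.

def attachment_scores(gold, predicted):
    """UAS/LAS over (head, label) pairs, one pair per token; all tokens count."""
    assert len(gold) == len(predicted)
    uas = sum(g_head == p_head
              for (g_head, _), (p_head, _) in zip(gold, predicted))
    las = sum(g == p for g, p in zip(gold, predicted))
    n = len(gold)
    return 100.0 * uas / n, 100.0 * las / n

# Toy sentence: "Heterosexuals increasingly back gay marriage" (cf. Figure 5).
# Heads are 1-indexed token positions (0 = root); labels are dependency types.
gold      = [(3, "nsubj"), (3, "advmod"), (0, "root"), (5, "amod"), (3, "dobj")]
predicted = [(3, "advmod"), (3, "advmod"), (0, "root"), (5, "amod"), (3, "dobj")]

print("UAS=%.1f LAS=%.1f" % attachment_scores(gold, predicted))  # UAS=100.0 LAS=80.0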
2016
147
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1567–1578, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Inferring Perceived Demographics from User Emotional Tone and User-Environment Emotional Contrast Svitlana Volkova Johns Hopkins University (now at Pacific Northwest National Laboratory) Baltimore, MD, 21218, USA [email protected] Yoram Bachrach Microsoft Research Cambridge, UK CB1 2FB [email protected] Abstract We examine communications in a social network to study user emotional contrast – the propensity of users to express different emotions than those expressed by their neighbors. Our analysis is based on a large Twitter dataset, consisting of the tweets of 123,513 users from the USA and Canada. Focusing on Ekman’s basic emotions, we analyze differences between the emotional tone expressed by these users and their neighbors of different types, and correlate these differences with perceived user demographics. We demonstrate that many perceived demographic traits correlate with the emotional contrast between users and their neighbors. Unlike other approaches on inferring user attributes that rely solely on user communications, we explore the network structure and show that it is possible to accurately predict a range of perceived demographic traits based solely on the emotions emanating from users and their neighbors. 1 Introduction The explosion of social media services like Twitter, Google+ and Facebook have led to a growing application potential for personalization in human computer systems such as personalized intelligent user interfaces, recommendation systems, and targeted advertising. Researchers have started mining these massive volumes of personalized and diverse data produced in public social media with the goal of learning about their demographics (Burger et al., 2011; Zamal et al., 2012; Volkova et al., 2015) and personality (Golbeck et al., 2011; Kosinski et al., 2013),1 lan1https://apps.facebook.com/snpredictionapp/ guage variation (Eisenstein et al., 2014; Kern et al., 2014; Bamman et al., 2014),2 likes and interests (Bachrach et al., 2012; Lewenberg et al., 2015), emotions and opinions they express (Bollen et al., 2011b; Volkova and Bachrach, 2015), their well-being (Schwartz et al., 2013) and their interactions with online environment (Bachrach, 2015; Kalaitzis et al., 2016). The recent study has shown that the environment in a social network has a huge influence on user behavior and the tone of the messages users generate (Coviello et al., 2014; Ferrara and Yang, 2015a). People vary in the ways they respond to the emotional tone of their environment in a social network. Some people tend to send out messages with a positive emotional tone, while others tend to express more negative emotions such as sadness or fear. Some of us are likely to share peer messages that are angry, whereas others filter out such messages. In this work we focus on the problem of predicting user perceived demographics by examining the emotions expressed by users and their immediate neighbors. We first define the user emotional tone, the environment emotional tone, and the user-environment emotional contrast. Definition 1 Environment emotional tone is the proportion of tweets with a specific emotion produced by the user’s neighbors. For example, if the majority of tweets sent by the user’s neighbors express joy, that user has a positive environment. 
In contrast, a user is in a negative environment if most of his or her neighbors express anger. Definition 2 User emotional tone is the proportion of tweets with a specific emotion produced by a user. If a user mostly sends sad messages, he generates a sad emotional tone, while a user who mostly sends joyful messages has a joyful tone. 2http://demographicvis.uncc.edu/ 1567 Definition 3 User-environment emotional contrast is a degree to which user emotions differ from the emotions expressed by user neighbors. We say that users express more of an emotion when they express it more frequently than their neighbors, and say they express less of an emotion when they express it less frequently than their environment. There are two research questions we address in this work. First, we analyze how user demographic traits are predictive of the way they respond to the emotional tone of their environment in a social network. One hypothesis stipulates that the emotional response is a universal human trait, regardless of the specific demographic background (Wierzbicka, 1986; Cuddy et al., 2009). For example, men and women or young and old people should not be different in the way they respond to their emotional environment. An opposite hypothesis is a demographic dependent emotional contrast hypothesis, stipulating that user demographic background is predictive of the emotional contrast with the environment. For example, one might expect users with lower income to express negative emotion even when their environment expresses mostly positive emotions (high degree of emotional contrast), while users with higher income are more likely to express joy even if their environment expresses negative emotions (Kahneman and Deaton, 2010). We provide an empirical analysis based on a large dataset sampled from a Twitter network, supporting the demographic dependent emotional contrast hypothesis. We show that users predicted to be younger, without kids and with lower income tend to express more sadness compared to their neighbors but older users, with kids and higher income express less; users satisfied with life express less anger whereas users dissatisfied with life express more anger compared to their neighbors; optimists express more joy compared to their environment whereas pessimists express less. Furthermore, we investigate whether user demographic traits can be predicted from user emotions and user-environment emotional contrast. Earlier work on inferring user demographics has examined methods that use lexical features in social networks to predict demographic traits of the author (Burger et al., 2011; Van Durme, 2012; Conover et al., 2011; Bergsma et al., 2013; Bamman et al., 2014; Ruths et al., 2014; Sap et al., 2014). However, these are simply features of the text a user produces, and make limited use of the social embedding of the user in the network. Only limited amount of work briefly explored the network structure for user profiling (Pennacchiotti and Popescu, 2011a; Filippova, 2012; Zamal et al., 2012; Volkova et al., 2014; Culotta et al., 2015). In contrast, we investigate the predictive value of features that are completely dependent on the network: the emotional contrast between users and their neighbors. We also combine network (context) and text (content) features to further boost the performance of our models. Our results show that the emotional contrast of users is very informative regarding their demographic traits. 
Even a very small set of features consisting of the emotional contrast between users and their environment for each of Ekman’s six basic emotions and three sentiment types is sufficient to obtain high quality predictions for a range of user attributes. Carrying out such an analysis requires using a large dataset consisting of many users annotated with a variety of properties, and a large pool of their communications annotated with emotions and sentiments. Creating such a large dataset with the ground truth annotations is extremely costly; user sensitive demographics e.g., income, age is not available for the majority of social media including Twitter. Therefore, we rely our analysis on a large Twitter dataset annotated with demographics and affects using predictive models that can accurately infer user attributes, emotions and sentiments as discussed in Section 3. 2 Data User-Neighbor Dataset For the main analysis we collected a sample of U = 10, 741 Twitter users and randomly sampled their neighbors n ∈N(u) of different types including friends – u follows n(u), mentions – u mentions n(u) in his or her tweets e.g., @modollar1, and retweets – u retweets n(u) tweets e.g., RT @GYPSY. In total we sampled N= 141, 034 neighbors for U=10, 741 Relation ⊆U Nuniq Nall Ttotal Retweet R 9,751 32,197 48,262 6,345,722 Mention M 9,251 37,199 41,456 7,634,961 Friend F 10,381 43,376 51,316 8,973,783 TOTAL 10,741 112,772 141,034 24,919,528 Table 1: Twitter ego-network sample stats: U=123, 513 unique users with T=24, 919, 528 tweets, and E=141, 034 edges that represent social relations between Twitter users. 1568 users; on average 15 neighbors per user, 5 neighbors of each type with their 200 tweets; in total T=24, 919, 528 tweets as reported in Table 1. We also report the number of users with at least one neighbor of each type ⊆U and the number of unique neighbors Nuniq.3 Dataset Annotated with Demographics Unlike Facebook (Bachrach et al., 2012; Kosinski et al., 2013), Twitter profiles do not have personal information attached to the profile e.g., gender, age, education. Collecting self-reports (Burger et al., 2011; Zamal et al., 2012) brings data sampling biases which makes the models trained on selfreported data unusable for predictions of random Twitter users (Cohen and Ruths, 2013; Volkova et al., 2014). Asking social media users to fill personality questionnaires (Kosinski et al., 2013; Schwartz et al., 2013) is time consuming. An alternative way to collect attribute annotations is through crowdsourcing as has been effectively done recently (Flekova et al., 2015; Sloan et al., 2015; Preoiuc-Pietro et al., 2015). Thus, to infer sociodemographic traits for a large set of random Twitter users in our dataset we relied on pre-trained models learned from 5, 000 user profiles annotated via crowdsourcing4 released by Volkova and Bachrach (2015). We annotated 125, 513 user and neighbor profiles with eight sociodemographic traits. We only used a subset of sociodemographic traits from their original study to rely our analysis on models trained on annotations with high or moderate inter-annotator agreement. Additionally, we validated the models learned from the crowdsourced annotations on several public datasets labeled with gender as described in Section 2. Table 2 reports attribute class distributions and the number of profiles annotated. 
Validating Crowdsourced Annotations To validate the quality of perceived annotations we applied 4,998 user profiles to classify users from the existing datasets annotated with gender using approaches other than crowdsourcing. We ran experiments across three datasets (including perceived annotations): Burger et al.’s data (Burger et al., 3Despite the fact that we randomly sample user neighbors, there still might be an overlap between user neighborhoods dictated by the Twitter network design. Users can be reweeted or mentioned if they are in the friend neighborhood R ⊂F, M ⊂F. 4Data collection and perceived attribute annotation details are discussed in (Volkova and Bachrach, 2015) and (PreoiucPietro et al., 2015). Attribute Class Distribution Profiles Age ≤25 y.o. (65%), > 25 y.o. 3,883 Children No (84%), Yes 5,000 Education High School (68%), Degree 4,998 Ethnicity Caucasian (59%), Afr. Amer. 4,114 Gender Female (58%), Male 4,998 Income ≤$35K (66%), > $35K 4,999 Life Satisf. Satisfied (78%), Dissatisfied 3,789 Optimism Optimist (75%), Pessimist 3,562 Table 2: Annotation statistics of perceived user properties from Volkova and Bachrach (2015). 2011) – 71,312 users, gender labels were obtained via URL following users’ personal blogs; Zamal et al.’s data (Zamal et al., 2012) – 383 users, gender labels were collected via user names. Table 3 presents a cross-dataset comparison results. We consistently used logistic regression with L2 regularization and relied on word ngram features similar to Volkova and Bachrach (2015). Accuracies on a diagonal are obtained using 10-fold cross-validation. These results show that textual classifiers trained on perceived annotations have a reasonable agreement with the alternative prediction approaches. This provides another indication that the quality of crowdsourced annotations, at least for gender, is acceptable. There are no publicly available datasets annotated with other attributes from Table 2, so we cannot provide a similar comparison for other traits. Train\Test Users Burger Zamal Perceived Burger 71,312 0.71 0.71 0.83 Zamal 383 0.47 0.79 0.53 Perceived 4,998 0.58 0.66 0.84 Table 3: Cross-dataset accuracy for gender prediction on Twitter. Sentiment Dataset Our sentiment analysis dataset consists of seven publicly available Twitter sentiment datasets described in detail by Hassan Saif, Miriam Fernandez and Alani (2013). It includes T L S = 19, 555 tweets total (35% positive, 30% negative and 35% neutral) from Stanford,5 Sanders,6 SemEval-2013,7 JHU CLSP,8 SentiStrength,9 Obama-McCain Debate and Health Care.10 Emotion Dataset We collected our emotion dataset by bootstrapping noisy hashtag annota5http://help.sentiment140.com 6http://www.sananalytics.com/lab/twitter-sentiment/ 7http://www.cs.york.ac.uk/semeval-2013/task2/ 8http://www.cs.jhu.edu/∼svitlana/ 9http://sentistrength.wlv.ac.uk/ 10https://bitbucket.org/speriosu/updown/ 1569 Figure 1: Our approach for predicting user perceived sociodemographics and affects on Twitter. tions for six basic emotions argued by Ekman11 as have been successfully done before (De Choudhury et al., 2012; Mohammad and Kiritchenko, 2014). Despite the existing approaches do not disambiguate sarcastic hashtags e.g., It’s Monday #joy vs. It’s Friday #joy, they still demonstrate that a hashtag is a reasonable representation of real feelings (Gonz´alez-Ib´a˜nez et al., 2011). 
Moreover, in this work we relied on emotion hashtag synonyms collected from WordNet-Affect (Valitutti, 2004), GoogleSyns and Roget’s thesaurus to overweight the sarcasm factor. Overall, we collected T L E = 52, 925 tweets annotated with anger (9.4%), joy (29.3%), fear (17.1%), sadness (7.9%), disgust (24.5%) and surprise (15.6%). 3 Methodology Annotating User-Neighbor Data with Sociodemographics and Affects As shown in Figure 1, to perform our analysis we developed three machine learning components. The first component is a user-level demographic classifier ΦA(u), which can examine a set of tweets produced by any Twitter user and output a set of predicted demographic traits for that user, including age, education etc. Each demographic classifier relies on features extracted from user content. The second and third components are tweet-level emotion and sentiment classifiers ΦE(t) and ΦS(t), which can examine any tweet to predict the emotion and sentiment expressed in the tweet. For inferring user demographics, emotions and sentiments we trained log-linear models with L2 regularization using scikit-learn.12 Our models 11We prefer Ekman’s emotion classification over others e.g., Plutchik’s because we would like to compare the performance of our predictive models to other systems. 12Scikit-learn toolkit: http://scikit-learn.org/stable/ Email [email protected] to get access to pre-trained scikitlearn models and the data. rely on word ngram features extracted from user or neighbor tweets and affect-specific features described below. Perceived Attribute Classification Quality In Section 2 we compared attribute prediction models trained on crowdsourced data vs. other datasets. We showed that models learned from perceived annotations yield higher or comparable performance using the same features and learning algorithms. Given Twitter data sharing restriction,13 we could only make an indirect comparison with other existing approaches. We found that our models report higher accuracy compared to the existing approaches for gender: +0.12 (Rao et al., 2010), +0.04 (Zamal et al., 2012); and ethnicity: +0.08 (Bergsma et al., 2013), +0.15 (Pennacchiotti and Popescu, 2011b).14 For previously unexplored attributes we present the ROC AUC numbers obtained using our log-linear models trained on lexical features estimated using 10-fold c.v. in Table 6. Affect Classification Quality For emotion and opinion classification we trained tweet-level classifiers using lexical features extracted from tweets annotated with sentiments and six basic emotions. In addition to lexical features we extracted a set of stylistic features including emoticons, elongated words, capitalization, repeated punctuation, number of hashtags and took into account the clauselevel negation (Pang et al., 2002). Unlike other approaches (Wang and Manning, 2012), we observed that adding other linguistic features e.g., higher order ngrams, part-of-speech tags or lexicons did not improve classification performance. We demonstrate our emotion model prediction quality using 10-fold c.v. on our hashtag emotion dataset and compare it to other existing datasets in Table 4. Our results significantly outperform the existing approaches and are comparable with the state-of-the-art system for Twitter sentiment classification (Mohammad et al., 2013; Zhu et al., 2014) (evaluated on the official SemEval-2013 test set our system yields F1 as high as 0.66). 
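As a minimal sketch of the tweet-level affect classifiers described above, the snippet below pairs word n-gram counts with an L2-regularized logistic regression (log-linear) model in scikit-learn. The tweets and labels here are toy placeholders, and the stylistic features (emoticons, elongation, capitalization, repeated punctuation, hashtag counts) and negation handling are omitted; it approximates, rather than reproduces, the released models.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholders; in the paper, labels come from the hashtag-bootstrapped
# emotion dataset (52,925 tweets) and the combined sentiment datasets.
tweets = ["I can't believe this just happened!!!",
          "what a gorgeous morning, feeling great",
          "this traffic makes me furious",
          "missing my best friend so much today"]
labels = ["surprise", "joy", "anger", "sadness"]

# Log-linear model with L2 regularization over word n-gram counts,
# mirroring the tweet-level emotion/sentiment classifiers described above.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 1), lowercase=True),
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
clf.fit(tweets, labels)

print(clf.predict(["so angry right now"]))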
13 Twitter policy restricts data sharing to tweet IDs or user IDs rather than complete tweets or user profiles. Thus, some profiles may become private or get deleted over time.
14 Other existing work on inferring user attributes relies on classification with different categories or uses regression, e.g., age (Nguyen et al., 2011), income (Preoiuc-Pietro et al., 2015), and education (Li et al., 2014).

#Emotion    Wang (2012)       Roberts (2012)   Qadir (2013)    Mohammad (2014)   This work
#anger      457,972   0.72    583    0.64      400     0.44    1,555    0.28     4,963    0.80
#disgust    –         –       922    0.67      –       –       761      0.19     12,948   0.92
#fear       11,156    0.44    222    0.74      592     0.54    2,816    0.51     9,097    0.77
#joy        567,487   0.72    716    0.68      1,005   0.59    8,240    0.62     15,559   0.79
#sadness    489,831   0.65    493    0.69      560     0.46    3,830    0.39     4,232    0.62
#surprise   1,991     0.14    324    0.61      –       –       3,849    0.45     8,244    0.64
ALL         1,991,184 –       3,777  0.67      4,500   0.53    21,051   0.49     52,925   0.78
Table 4: Emotion classification results (one vs. all for each emotion and 6-way for ALL) using our models compared to others; each cell gives the number of annotated tweets and the F1 score.

Correlating User-Environment Emotional Contrast and Demographics We performed our user-environment emotional contrast analysis on a set of users U and neighbors N, where N(u) are the neighbors of u. For each user we defined a set of incoming T^in and outgoing T^out tweets. We then classified the T^in and T^out tweets containing a sentiment s ∈ S or an emotion e ∈ E, e.g., T^in_e, T^out_e and T^in_s, T^out_s, where E = {anger, joy, fear, surprise, disgust, sad} and S = {positive, negative, neutral}. We measured the proportion of a user's incoming and outgoing tweets containing a certain emotion or sentiment, e.g., p^in_sad = |T^in_sad| / |T^in|. Then, for every user, we estimated the user-environment emotional contrast as the normalized difference between the incoming p^in_e and outgoing p^out_e emotion and sentiment proportions:

\Delta_e = \frac{p^{out}_e - p^{in}_e}{p^{out}_e + p^{in}_e}, \quad \forall e \in E. \qquad (1)

We estimated the environment emotional tone and the user emotional tone from the distributions over incoming and outgoing affects, e.g., D^in_s = {p^in_pos, ..., p^in_neut} and D^in_e = {p^in_joy, ..., p^in_fear}. We evaluated the environment emotional tone – the proportions of incoming emotions D^in_e and sentiments D^in_s – on the combined set of friend, mentioned and retweeted users, and the user emotional tone – the proportions of outgoing emotions D^out_e and sentiments D^out_s – from the user's own tweets. We measure the similarity between the user emotional tone and the environment emotional tone via the Jensen-Shannon divergence (JSD), a symmetric and finite variant of the KL divergence that measures the difference between two probability distributions:

\mathrm{JSD}(D^{in} \,\|\, D^{out}) = \frac{1}{2} I(D^{in} \,\|\, D) + \frac{1}{2} I(D^{out} \,\|\, D), \qquad (2)

where D = \frac{1}{2}(D^{in} + D^{out}) and I(P \,\|\, Q) = \sum_{e} P(e) \ln \frac{P(e)}{Q(e)} is the KL divergence. Next, we compared the emotion and sentiment differences for groups of users with different demographics A = {a0, a1}, e.g., a0 = Male and a1 = Female, using a non-parametric Mann-Whitney U test. For example, we measured the mean ∆_joy within the group of users predicted to be male and within the group predicted to be female, and estimated whether these means are statistically significantly different.
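Equations (1) and (2) can be made concrete with a short sketch: given per-user counts of incoming and outgoing tweets per affect class, we compute the contrast ∆e and the divergence between the incoming and outgoing affect distributions. The smoothing constants are our own additions to keep the quantities defined when a class has zero counts; they are not specified in the paper.

import numpy as np

EMOTIONS = ["anger", "joy", "fear", "surprise", "disgust", "sadness"]

def affect_proportions(counts):
    """p_e = |T_e| / |T| for each emotion, from raw per-emotion tweet counts."""
    total = max(sum(counts.values()), 1)
    return {e: counts.get(e, 0) / total for e in EMOTIONS}

def emotional_contrast(p_out, p_in, eps=1e-9):
    """Eq. (1): normalized difference between outgoing and incoming proportions."""
    return {e: (p_out[e] - p_in[e]) / (p_out[e] + p_in[e] + eps) for e in EMOTIONS}

def jsd(p_in, p_out, eps=1e-12):
    """Eq. (2): Jensen-Shannon divergence between the two affect distributions."""
    d_in = np.array([p_in[e] for e in EMOTIONS]) + eps
    d_out = np.array([p_out[e] for e in EMOTIONS]) + eps
    d_in, d_out = d_in / d_in.sum(), d_out / d_out.sum()
    d = 0.5 * (d_in + d_out)
    kl = lambda p, q: float(np.sum(p * np.log(p / q)))
    return 0.5 * kl(d_in, d) + 0.5 * kl(d_out, d)

# Example: a user whose neighbors are mostly joyful but who mostly tweets sadness.
p_in = affect_proportions({"joy": 70, "sadness": 10, "anger": 20})    # environment tone
p_out = affect_proportions({"joy": 10, "sadness": 80, "anger": 10})   # user tone
print(emotional_contrast(p_out, p_in)["sadness"])  # > 0: expresses more sadness
print(jsd(p_in, p_out))                            # larger value = more contrast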
Finally, we used logistic regression to infer a variety of attributes for the U = 10,741 users using the different feature sets below:
• outgoing emotional tone p^out_e, p^out_s – the overall emotional profile of a user, regardless of the emotions projected in his or her environment;
• user-environment emotional contrast ∆e, ∆s – whether a certain emotion ∆e or sentiment ∆s is expressed more or less by the user given the emotions he or she has been exposed to within the social environment;
• lexical features extracted from user content – the distribution of word unigrams over the vocabulary.
4 Experimental Results
For the sake of brevity, we will refer to a user predicted to be male as a male, and to a tweet predicted to contain surprise as simply containing surprise. Despite this needed shorthand, it is important to recall that a major contribution of this work is that these results are based on automatically predicted properties, as compared to ground truth. We argue here that while such automatically predicted annotations may be less than perfect at the individual user or tweet level, they provide for meaningful analysis in the aggregate.
4.1 Similarity between User and Environment Emotional Tones
We report similarities between the user emotional tone and the environment emotional tone for different groups of Twitter users using the Jensen-Shannon divergence defined in Eq. 2. We present the mean JSD values, estimated over users with two contrasting attribute values, e.g., predicted to be a0=Male vs. a1=Female, in Table 5.
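The per-group comparisons behind Table 5 can be illustrated with the sketch below: split users by a predicted binary attribute, average their user-environment JSD values, and test the difference with a non-parametric Mann-Whitney U test. The values here are random placeholders, not the actual per-user divergences.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder per-user JSD values between incoming and outgoing affect
# distributions, split by a predicted binary attribute (e.g., a0=Male, a1=Female).
jsd_a0 = rng.beta(2, 8, size=500)  # stand-in for users predicted as a0
jsd_a1 = rng.beta(2, 7, size=500)  # stand-in for users predicted as a1

# Group means, reported as percentages as in Table 5.
print("mean JSD a0: %.1f%%" % (100 * jsd_a0.mean()))
print("mean JSD a1: %.1f%%" % (100 * jsd_a1.mean()))

# Non-parametric test for whether the two groups differ significantly.
stat, p_value = mannwhitneyu(jsd_a0, jsd_a1, alternative="two-sided")
print("U=%.1f, p=%.4f" % (stat, p_value))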
When the incoming and outgoing emotional tones Din e and Dout e are estimated over all neighbors, they are significantly different for all attributes except education and life satisfaction. 4.2 User-Environment Affect Contrast Our key findings discussed below confirm the demographic dependent emotional contrast hypothesis. We found that regardless demographics Twitter users tend to express more (U > N) sadness↑, disgust↑, joy↑and neutral↑opinions and express less (U < N) surprise↓, fear↓, anger↓, positive↓ and negative↓opinions compared to their neighbors except some exclusions below. Users predicted to be older and having kids express less sadness whereas younger users and user without kids express more. It is also known as the aging positivity effect recently picked up in social media (Kern et al., 2014). It states that older people are happier than younger people (Carstensen and Mikels, 2005). Users predicted to be pessimists express less joy compared to their neighbors whereas optimists express more. Users predicted to be dissatisfied with life express more anger compared to their environment whereas users predicted to be satisfied with life produce less. Users predicted to be older, with a degree and higher income express neutral opinions compared to their environment whereas users predicted to be younger, with lower income and high school education express more neutral opinions. Users predicted to be male and having kids express more positive opinions compared to their neighbors whereas female users and users without kids express less. We present more detailed analysis on user-environment emotional contrast for different attribute-affect combinations in Figure 2. Gender Female users have a stronger tendency to express more surprise and fear compared to their environment. They express less sadness compared to male users, supporting the claim that female users are more emotionally driven than male users in social media (Volkova et al., 2013). Male users have a stronger tendency to express more anger compared to female users. Female users tend to express less negative opinions compared to their environment. Age Younger users express more sadness but older users express similar level of sadness compared to their environment. It is also known as the aging positivity effect recently picked up in social media (Kern et al., 2014). It states that older people are happier than younger people (Carstensen and Mikels, 2005). They have a stronger tendency to express less anger but more disgust compared to younger users. Younger users have a stronger tendency to express less fear and negative sentiment compared to older users. Education Users with a college degree have a weaker tendency to express less sadness but stronger tendency to express more disgust from 1572 (a) Male vs. female (b) Older (above 25 y.o.) and younger (below 25 y.o.) (c) College degree vs. high school education (d) Users with vs. without children (e) Users with higher and lower income (f) African American vs. Caucasian users (g) Optimists vs. pessimists (h) Satisfied vs. dissatisfied with life Figure 2: Mean differences in affect proportions between users with contrasting demographics. Error bars show standard deviation for every e and s; p-values are shown as ≤0.01∗∗∗, ≤0.05∗∗and ≤0.1∗. their environment compared to users with high school education. They have a stronger tendency to express less anger but weaker tendency to express less fear. 
Users with high school education are likely to express more neutral opinions whereas users with a college degree express less. Children Users with children have a stronger tendency to express more joy, less surprise and fear from their environment compared to users without children. Users with children express less sadness and less positive opinions whereas users without children express more. Income Users with higher annual income have a weaker tendency to express more sadness and have a stronger tendency to express more disgust, less anger and fear from their environment. They tend to express less neutral opinions whereas users with lower income express more. Ethnicity Caucasian users have a stronger tendency to express more sadness and disgust from their environment whereas African American users have a stronger tendency to express more joy and less disgust. African American users have a stronger tendency to express less anger and surprise, but a weaker tendency to express less fear. Optimism Optimists express more joy from their environment whereas pessimists do not. Instead, pessimists have a stronger tendency to express more sadness and disgust compared to optimists. Optimists tend to express less fear. Pessimists tend to express less positive but more neutral opinions. Life Satisfaction User-environment emotional contrast for the life satisfaction attribute highly correlates with the optimism attribute. Users dissatisfied with life have a weaker tendency to ex1573 press more joy but a stronger tendency to express more sadness and disgust. They express more anger whereas users satisfied with life express less anger. Users satisfied with life have a stronger tendency to express less fear but weaker tendency to express less positive and negative opinions. In addition to our analysis on user-environment emotional contrast and demographics, we discovered which users are more “opinionated” relative to their environment on Twitter. In other words, users in which demographic group amplify less neutral but more subjective tweets e.g., positive, negative. As shown in Figure 2 male users are significantly more opinionated ≫than female users, users with kids > users without kids, users with a college degree ≫users with high school education, older users ≫younger users, users with higher income ≫users with lower income, optimists ≫pessimists, satisfied ≫dissatisfied with life, and African American > Caucasian users. 4.3 Inferring User Demographics From User-Environment Emotional Contrast Our findings in previous sections indicate that predicted demographics correlate with the emotional contrast between users and their environment in social media. We now show that by using user emotional tone and user-environment emotional contrast we can quite accurately predict many demographic properties of the user. Table 6 presents the quality of demographic predictions in terms of the area and the ROC curve based on different feature sets. These results indicate that most user traits can be quite accurately predicted using solely the emotional tone and emotional contrast features of the users. That is, given the emotions expressed by a user, and contrasting these with the emotions expressed by user environment, one can accurately infer many interesting properties of the user without using any additional information. We note that the emotional features have a strong influence on the prediction quality, resulting in significant absolute ROC AUC improvements over the lexical only feature set. 
Furthermore, we analyze correlations between users’ emotional-contrast features and their demographic traits. We found that differences between users and their environment in sadness, joy, anger and disgust could be used for predicting whether these users have children or not. Similarly, negative and neutral opinions, as opposed to joy, fear Attribute Lexical EmoSent All ∆ Age 0.63 0.74 (+0.11) 0.83 +0.20 Children 0.72 0.67 (–0.05) 0.80 +0.08 Education 0.77 0.78 (+0.01) 0.88 +0.11 Ethnicity 0.93 0.75 (–0.18) 0.97 +0.04 Gender 0.90 0.77 (–0.13) 0.95 +0.05 Income 0.73 0.77 (+0.04) 0.85 +0.12 Life Satisf. 0.72 0.77 (+0.05) 0.84 +0.12 Optimism 0.72 0.77 (+0.05) 0.83 +0.11 Table 6: Sociodemographic attribute prediction results in ROC AUC using Lexical, EmoSent (user emotional tone + user-environment emotional contrast), and All (EmoSent + Lexical) features extracted from user content. and surprise emotions can be predictive of users with higher education. 5 Discussion We examined the expression of emotions in social media, an issue that has also been the focus of recent work which analyzed emotion contagion using a controlled experiment on Facebook (Coviello et al., 2014). That study had important ethical implications, as it involved manipulating the emotional messages users viewed in a controlled way. It is not feasible for an arbitrary researcher to reproduce that experiment, as it was carried on the proprietary Facebook network. Further, the significant criticism of the ethical implications of the experimental design of that study (McNeal, 2014) indicates how problematic it is to carry out research on emotions in social networks using a controlled/interventional technique. Our methodology for studying emotions in social media thus uses an observational method, focusing on Twitter. We collected subjective judgments on a range of previously unexplored user properties, and trained machine learning models to predict those properties for a large sample of Twitter users. We proposed a concrete quantitative definition of the emotional contrast between users and their network environment, based on the emotions emanating from the users versus their neighbors. We showed that various demographic traits correlate with the emotional contrast between users and their environment, supporting the demographic-dependent emotional contrast hypothesis. We also demonstrated that it is possible to accurately predict many perceived demographic traits of Twitter users based solely on the emotional contrast between them and their neighbors. This suggests that the way in which the emotions we radiate differ from those expressed in our environment reveals a lot about our identity. 1574 We note that our analysis and methodology have several limitations. First, we only study correlations between emotional contrast and demographics. As such we do not make any causal inference regarding these parameters. Second, our labels regarding demographic traits of Twitter users were the result of subjective reports obtained using human annotations – subjective impressions (Flekova et al., 2016) of people rather than the true traits. Finally, we crawled both user and neighbor tweets within a short time frame (less than a week) and made sure that user and neighbor tweets were produced at the same time. Despite these limitations, our results do indicate higher performance compared to earlier work. Due to the large size of our dataset, we believe our findings are correct. 
6 Related Work Personal Analytics in Social Media Earlier work on predicting latent user attributes based on Twitter data uses supervised models with lexical features for classifying four main attributes including gender (Rao et al., 2010; Burger et al., 2011; Zamal et al., 2012), age (Zamal et al., 2012; Kosinski et al., 2013; Nguyen et al., 2013), political preferences (Volkova and Van Durme, 2015) and ethnicity (Rao et al., 2010; Bergsma et al., 2013). Similar work characterizes Twitter users by using network structure information (Conover et al., 2011; Zamal et al., 2012; Volkova et al., 2014; Li et al., 2015), user interests and likes (Kosinski et al., 2013; Volkova et al., 2016), profile pictures (Bachrach et al., 2012; Leqi et al., 2016). Unlike the existing work, we not only focus on previously unexplored attributes e.g., having children, optimism and life satisfaction but also demonstrate that user attributes can be effectively predicted using emotion and sentiment features in addition to commonly used text features. Emotion and Opinion Mining in Microblogs Emotion analysis15 has been successfully applied to many kinds of informal and short texts including emails, blogs (Kosinski et al., 2013), and news headlines (Strapparava and Mihalcea, 2007), but emotions in social media, including Twitter and Facebook, have only been investigated recently. Researchers have used supervised learning models trained on lexical word ngram features, synsets, 15EmoTag: http://nil.fdi.ucm.es/index.php?q=node/186 emoticons, topics, and lexicon frameworks to determine which emotions are expressed on Twitter (Wang et al., 2012; Roberts et al., 2012; Qadir and Riloff, 2013; Mohammad and Kiritchenko, 2014). In contrast, sentiment classification in social media has been extensively studied (Pang et al., 2002; Pang and Lee, 2008; Pak and Paroubek, 2010; Hassan Saif, Miriam Fernandez and Alani, 2013; Nakov et al., 2013; Zhu et al., 2014). Emotion Contagion in Social Networks Emotional contagion theory states that emotions and sentiments of two messages posted by friends are more likely to be similar than those of two randomly selected messages (Hatfield and Cacioppo, 1994). There have been recent studies about emotion contagion in massively large social networks (Fan et al., 2013; Ferrara and Yang, 2015b; Bollen et al., 2011a; Ferrara and Yang, 2015a). Unlike these papers, we do not aim to model the spread of emotions or opinions in a social network. Instead, given both homophilic and assortative properties of a Twitter social network, we study how emotions expressed by user neighbors correlate with user emotions, and whether these correlations depend on user demographic traits. 7 Summary We examined a large-scale Twitter dataset to analyze the relation between perceived user demographics and the emotional contrast between users and their neighbors. Our results indicated that many sociodemographic traits correlate with user-environment emotional contrast. Further, we showed that one can accurately predict a wide range of perceived demographics of a user based solely on the emotions expressed by that user and user’s social environment. Our findings may advance the current understanding of social media population, their online behavior and well-being (Nguyen et al., 2015). Our observations can effectively improve personalized intelligent user interfaces in a way that reflects and adapts to user-specific characteristics and emotions. 
Moreover, our models for predicting user demographics can be effectively used for a variety of downstream NLP tasks e.g., text classification (Hovy, 2015), sentiment analysis (Volkova et al., 2013), paraphrasing (PreotiucPietro et al., 2016), part-of-speech tagging (Hovy and Søgaard, 2015; Johannsen et al., 2015) and visual analytics (Dou et al., 2015). 1575 References Yoram Bachrach, Michal Kosinski, Thore Graepel, Pushmeet Kohli, and David Stillwell. 2012. Personality and patterns of Facebook usage. In Proceedings of ACM WebSci, pages 24–32. Yoram Bachrach. 2015. Human judgments in hiring decisions based on online social network profiles. In Data Science and Advanced Analytics (DSAA), 2015. 36678 2015. IEEE International Conference on, pages 1–10. IEEE. David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics, 18(2):135–160. Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. Broadly improving user classification via communication-based name and location clustering on Twitter. In Proceedings of NAACL-HLT, pages 1010–1019. Johan Bollen, Bruno Gonc¸alves, Guangchen Ruan, and Huina Mao. 2011a. Happiness is assortative in online social networks. Artificial life, 17(3):237–251. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011b. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of EMNLP, pages 1301– 1309. Laura L Carstensen and Joseph A Mikels. 2005. At the intersection of emotion and cognition aging and the positivity effect. Current Directions in Psychological Science, 14(3):117–121. Raviv Cohen and Derek Ruths. 2013. Classifying political orientation on Twitter: It’s not easy! In Proceedings of ICWSM. Michael D. Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of Twitter users. In Proceedings of Social Computing. Lorenzo Coviello, Yunkyu Sohn, Adam DI Kramer, Cameron Marlow, Massimo Franceschetti, Nicholas A Christakis, and James H Fowler. 2014. Detecting emotional contagion in massive social networks. PloS one, 9(3):e90315. Amy JC Cuddy, Susan T Fiske, Virginia SY Kwan, Peter Glick, St´ephanie Demoulin, Jacques-Philippe Leyens, Michael Harris Bond, Jean-Claude Croizet, Naomi Ellemers, Ed Sleebos, et al. 2009. Stereotype content model across cultures: Towards universal similarities and some differences. British Journal of Social Psychology, 48(1):1–33. Aron Culotta, Nirmal Kumar Ravi, and Jennifer Cutler. 2015. Predicting the demographics of Twitter users from website traffic data. In Proceedings of AAAI. Munmun De Choudhury, Michael Gamon, and Scott Counts. 2012. Happy, nervous or surprised? Classification of human affective states in social media. In Proceedings of ICWSM. Wenwen Dou, Isaac Cho, Omar ElTayeby, Jaegul Choo, Xiaoyu Wang, and William Ribarsky. 2015. Demographicvis: Analyzing demographic information based on user generated content. In Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on, pages 57–64. IEEE. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2014. Diffusion of lexical change in social media. PloS one, 9(11):e113114. Rui Fan, Jichang Zhao, Yan Chen, and Ke Xu. 2013. Anger is more influential than joy: sentiment correlation in Weibo. arXiv preprint arXiv:1309.2402. 
2016
148
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1579–1588, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Prototype Synthesis for Model Laws Matthew Burgess University of Michigan [email protected] Eugenia Giraudy UC Berkeley & YouGov [email protected] Eytan Adar University of Michigan [email protected] Abstract State legislatures often rely on existing text when drafting new bills. Resource and expertise constraints, which often drive this copying behavior, can be taken advantage of by lobbyists and special interest groups. These groups provide model bills, which encode policy agendas, with the intent that the models become actual law. Unfortunately, model legislation is often opaque to the public–both in source and content. In this paper we present LOBBYBACK, a system that reverse engineers model legislation from observed text. LOBBYBACK identifies clusters of bills which have text reuse and generates “prototypes” that represent a canonical version of the text shared between the documents. We demonstrate that LOBBYBACK accurately reconstructs model legislation and apply it to a dataset of over 550k bills. 1 Introduction Beginning in 2005, a number of states began passing “Stand Your Ground” laws–legal protections for the use of deadly force in self-defense. Within a few years, at least two dozen states implemented a version of the this legislation (Garrett and Jansa, 2015). Though each state passed its own variant, there is striking similarity in the text of the legislation. While seemingly “viral” the expedient adoption of these laws was not the result of an organic diffusion process, but rather more centralized efforts. An interest group, the American Legislative Exchange Council (ALEC), drafted model legislation (in this case modeled on Florida’s law) and lobbied to have the model law enacted in other states. While the influence of the lobbyists through model laws grows, the hidden nature of their original text (and source) creates a troubling opacity. Reconstructing such hidden text through analysis of observed, potentially highly mutated, copies poses an interesting and challenging NLP problem. We refer to this as the Dark Corpora problem. Since legislatures are not required to cite the source of the text that goes into a drafted bill, the bills that share text are unknown beforehand. The first problem therein lies in identifying clusters of bills with reused text. Once a cluster is identified, a second challenge is the reconstruction of the original or prototype bill that corresponds to the observed text. The usual circumstances under which a model law is adopted by individual states involves “mutation.” This may be as simple as modifying parameters to the existing policy (e.g., changing the legal limit allowed of medical marijuana possession to 3.0 ounces from 2.5) or can be more substantial, with significant additions or deletions of different conditions of a policy. Interestingly, the need to maintain the objectives of the law creates a pressure to retain a legally meaningful structure and precise language–thus changes need to satisfy the existing laws of the state but carry out the intent of the model. Both subtle changes of this type, and more dramatic ones, are of great interest to political scientists. A specific application, for example, may be predicting likely spots for future modifications as additional states adopt the law. 
Our challenge is to identify and represent “prototype” sentences that capture the similarity of observed sentences while also capturing the variation. In this paper we propose LOBBYBACK, a system that automatically identifies clusters of documents that exhibit text reuse, and generates “prototypes” that represent a canonical version of text shared between the documents. In order to syn1579 thesize the prototypes, LOBBYBACK first extracts clusters of sentences, where each sentence pertains to the same policy but can exhibit variation. LOBBYBACK then uses a greedy multi-sequence alignment algorithm to identify an approximation of the optimal alignment between the sentences. Prototype sentences are synthesized by computing a consensus sentence from the multi-sentence alignment. As sentence variants are critical in understanding the effect of the model legislation, we can not simply generate a single “common” summary sentence as a prototype. Rather, LOBBYBACK creates a data structure that captures this variability in text for display to the end-user. With LOBBYBACK, end-users can quickly identify clusters of text reuse to better understand what type of policies are diffused across states. In other applications, sentence prototypes can be used by journalists and researchers to discover previously unknown model legislation and the involvement of lobbying organizations. For example, prototype text can be compared to the language or policy content of interest groups documents and accompanied with qualitative research it can help discover which lobbyists have drafted this legislation. We evaluated LOBBYBACK on the task of reconstructing 122 known model legislation documents. Our system was able to achieve an average of 0.6 F1 score based on the number of prototype sentences that had high similarity with sentences from the model legislation. We have also run LOBBYBACK on the entire corpus of state legislation (571,000 documents) from openstates.org as an open task. The system identified 4,446 clusters for which we generated prototype documents. We have released the resulting data set and code at http://github.com/mattburg/LobbyBack. LOBBYBACK is novel in fully automating and scaling the pipeline of model-legislation reconstruction. The output of this pipeline captures both the likely “source sentences” but also the variations of those sentences. 2 Related Work While no specific system or technique has focused on the problem of legislative document reconstruction, we find related work in a number of domains. Multi-document summarization (MDS), for example, can be used to partially model the underlying problem–generating a representative document from multiple sources. Extractive MDS, in particular, is promising in that representative sentences are identified. Early work in extractive summarization include greedy approaches such as that proposed by Carbonell and Goldstein (1998). The algorithm uses an objective function which trades off between relevance and redundancy. Global optimization techniques attempt to generate “summaries” (selected sets of sentences or utterances) that maximize an objective based on informativeness, redundancy and/or length of summary. These have shown superior performance to greedy algorithms (Yih et al., 2007; Gillick et al., 2009). Approaches based on neural networks have recently been proposed for ranking candidate sentences (Cao et al., 2015). Graph based methods, such as LexRank (Erkan and Radev, 2004), have also proven effective for MDS. 
Extensions to this approach combine sentence ranking with clustering in order to minimize redundancy (Qazvinian and Radev, 2008; Wan and Yang, 2008; Cai and Li, 2013). The C-LexRank algorithm (Qazvinian and Radev, 2008), in particular, uses this combination and inspired our high level design. Similar to our approach, Wang and Cardie (2013) propose a method for generating templates of meeting summaries using multisequence alignment. Though related, it is important to note that the objectives of summarization (informativeness, reduced redundancy, etc.) are not entirely consistent with our task. For example, using the n-gram cooccurrence based ROUGE score would not be sufficient at evaluating LOBBYBACK. Our goal is to accurately reconstruct entire sentences of a hidden document given observed mutations of that document. Additionally, our goal is not simply to find a representative sentence that reflects the original document, but to capture the similarity and variability of the text within a given “sentence cluster.” Within the political science and legal studies communities research has focused on manual approaches to both understanding how model legislation impacts law and how policy ideas diffuse between bill text. As these studies are time consuming, there is no large-scale or broad analysis of legislative materials. Rather, researchers have limited their workload by focusing on a single topic (e.g., abortion (Patton, 2003) and crime (Kent and Carmichael, 2015)) or a single lobbying group (e.g., ALEC (Jackman, 2013). Similarly, those 1580 studying policy diffusion across US states have also limited their analysis to a few topics (e.g., same-sex marriage (Haider-Markel, 2001)). Recent attempts to automate the analysis of model legislation has had similar problems, as most researchers have limited their analysis to one interest group or a few relevant topics (HertelFernandez and Kashin, 2015; Jansa et al., 2015). Hertel-Fernandez and Kashin proposed a supervised model in which they train on hand labeled examples of state bills that borrow text and/or concepts from ALEC bills. The problem they focus on is different from ours. The motivation behind LOBBYBACK is that there exists many model legislation which we don’t have access to and the goal is to try and reconstruct these documents without labeled training data. Jansa et al. propose a technique for inferring a network of policy diffusion for manually labeled clusters of bills. Both Jansa et al. and Hertel-Fernandaz and Kashin propose techniques that only look at the problem of inferring whether two bills exhibit text reuse but unlike LOBBYBACK they do not attempt to infer whether specific policies (sentences) in the documents are similar/different. A related “dark corpora” problem, though at a far smaller scale, is the biblical “Q source” where hidden oral sources are reconstructed through textual analysis (Mournet, 2005). 3 Problem Definition Policy diffusion is a common phenomenon in state bills (Gray, 1973; Shipan and Volden, 2008; Berry and Berry, 1990; Haider-Markel, 2001). Unlike members of the (Federal) Congress, few state legislators have the expertise, time, and staff to draft legislation. It is far easier for a legislator to adapt existing legislative text than to write a bill from scratch. As a consequence, state legislatures have an increased willingness to adopt legislation drafted by interest groups or by legislators in other states (Jansa et al., 2015). 
In addition to states borrowing text from other legislators and lobbyists, another reason why bills can exhibit text reuse is when a new federal law passes and each state needs to modify its existing policy to conform with the new federal law. The result of legislative copying, whether caused by diffusion between states, influence from a lobby or the passing of a new federal law, is similar: a cluster of bills will share very similar text, 5 e d 4 a description draft or preliminary 5 5 e d 4 a a a a a description description draft draft or or or or preliminary preliminary Sec. Sec. Sec. Sec. Sec. Sec. Figure 1: Visualization of a multi-sentence alignment (fragment) and resulting prototype sentence. often varying only by implementation details of a given policy. The goal in constructing a prototype document–a representation of the “original” text– is to synthesize this document from the modified copies. In the case when the bill cluster was influenced by one external source, such as lobby or passage of a federal bill, the ideal prototype document would capture the language that each bill borrowed from the source document. In the case when their is not one single document that influenced a cluster of bills, the prototype will still give a summary of a concise description of the diffused text between bills, providing fast insight into what text was shared and changed within a bill cluster. 3.1 State Legislation Corpus We obtained the entire openstates.org corpus of state legislation, which includes 550,000 bills and 200,000 resolutions for all 50 states. While for some states this corpus includes data since 2007, for the majority of states we have data from 2010 on. We do not include data from Puerto Rico, where the text is in Spanish, and from Washington DC, which includes many idiosyncrasies (e.g., correspondence from city commissions introduced as bills). On average, each state introduced 10,524 bills, with an average length of 1205 words. 4 LOBBYBACK Architecture LOBBYBACK consists of 3 major components. The first component identifies clusters of bills that have text reuse. Then for each of these bill clusters, LOBBYBACK extracts and clusters the sentences from all documents. For each of the sentence clusters, LOBBYBACK synthesizes prototype sentences in order to capture the similarity and variability of the sentences in the cluster. 1581 4.1 Clustering Bills Groups of bills with significant text reuse represent candidates for analysis as they have may have all copied from the same model legislation. There are a number of ways one could identify such clusters through text mining. In our implementation, we have opted to generate a network representation of the bills and then use a network clustering (i.e., “community-finding”) algorithm to generate the bill clusters. In our network representation each node represents a state bill and weighted edges represent the degree to which two bills exhibit substantive text reuse. Since most pairs of bills do not have text reuse, we chose to use a network model because community finding algorithms work well on sparse data and do not require any parameter choice for the number of clusters. In the context of this paper, text reuse occurs when two state bills share: 1. Long passages of text, e.g. (sections of bills) that can differ in details. 2. These passages contain text of substantive nature to the topic of the bill (i.e., text that is not boilerplate). 
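As a rough illustration of this reuse criterion, the sketch below flags a candidate bill pair when a sufficient fraction of long word n-grams is shared. The 4-gram length and the 0.5 overlap threshold are illustrative stand-ins, not the system's actual retrieval setup, which is based on an ElasticSearch index as described next.

def ngrams(text, n=4):
    # Word n-grams of a bill; lowercasing keeps the sketch simple.
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def has_text_reuse(bill_a, bill_b, n=4, min_overlap=0.5):
    # Flag reuse when the smaller bill shares at least min_overlap of its n-grams.
    a, b = ngrams(bill_a, n), ngrams(bill_b, n)
    if not a or not b:
        return False
    return len(a & b) / min(len(a), len(b)) >= min_overlap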
In addition to text that describes policy, state bills also contain boilerplate text that is common to all bills from a particular state or to a particular topic. Examples of legislative boilerplate include: “Read first time 01/29/16. Referred to Committee on Higher Education” (meta-data describing where the bill is in the legislative process); and “Safety clause. The general assembly hereby finds, determines, and declares ...” (a standard clause included in nearly all legislation from Colorado, stating the eligibility of a bill to be petitioned with a referendum). In order to identify pairs of bills that exhibit text reuse, we created an inverted index that contained n-grams ranging from size 4-8. We use ElasticSearch to implement the inverted index and computed the similarity between bills using the “More like This” (MLT) query (Elastic, 2016). The MLT query first selects the 100 highest scoring TF*IDF n-grams from a given document and uses those to form a search query. The MLT query is able to quickly compute the similarity between documents and since it ranks the query terms by using TF*IDF the query text is more likely to be substantive rather then boilerplate. The MLT query we used was configured to only return documents that matched at least 50% of the query’s shingles and returned at most 100 documents per query. By implementing the similarity search using a TF*IDF cutoff we were able to scale the similarity computation while still maintaining our desire to identify reuse of substantive text. The edges of the bill similarity network are computed by calculating pairwise similarity. Each bill is submitted as input for an MLT query and scored matches are returned by the search engine. Since the MLT query extracts n-grams only for the query document, the similarity function between two documents di and dj is not symmetric. We construct a symmetric bill similarity network by taking the average score of each (di, dj) and its reciprocal (dj, di). A non-existent edge is represented as an edge with score 0. We further reduce the occurrence of false-positive edges by removing all edges with a score lower than 0.1. The resulting network is very sparse, consisting of 35,346 bills that have 1 or more neighbors, 125,401 edges, and 3534 connected components that contain an average of 10 bills. A specific connected component may contain more than one bill cluster. To isolate these clusters in the bill network we use the InfoMap community detection algorithm (Rosvall and Bergstrom, 2008). We use the InfoMap algorithm because it has been to shown to be one of the best community detection algorithms and it is able to detect clusters at a finer granularity than other methods. Our corpus contains both bills that have passed and those that have not. Bills can often be reintroduced in their entirety after failing the previous year. As we do not want to bias the clusters towards bills that are re-introduced more than others, we filter the clusters such that they only include the earliest bill from each state. 4.2 Prototype Synthesis Once we have identified a cluster of bills that have likely emerged from a single “source” we would like to construct a plausible representation of that source. The prototype synthesizer achieves this by constructing a canonical document that captures the similarity and variability of the content in a given bill cluster. The two main steps in prototype synthesis consists of clustering bill sentences and generating prototype sentences from the clusters. 
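Stepping back briefly to the network construction of Section 4.1, the following is a minimal sketch, assuming the asymmetric MLT scores have already been collected into a dictionary keyed by ordered bill pairs. The 0.1 edge threshold follows the text; community detection with InfoMap is then run on the resulting graph and is not reproduced here.

import networkx as nx

def build_bill_network(mlt_scores, threshold=0.1):
    # mlt_scores: dict mapping (query_bill, matched_bill) -> MLT score.
    graph = nx.Graph()
    seen = set()
    for (i, j), score in mlt_scores.items():
        if (j, i) in seen:
            continue
        seen.add((i, j))
        # Symmetrize by averaging a score with its reciprocal; a missing edge counts as 0.
        avg = (score + mlt_scores.get((j, i), 0.0)) / 2.0
        if avg >= threshold:
            graph.add_edge(i, j, weight=avg)
    return graph

Each connected component of this graph is then passed to the community detection step described above.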
1582 Figure 2: An sample state bill segment from Michigan Senate Bill 571 4.2.1 Sentence Clustering Most state bills have a common structure, consisting of an introduction that describes the intent of the bill followed by sections that contain the law to be implemented. Each section of a bill is comprised of self-contained policy, usually consisting of a long sentence that describes the policy and the implementation details of that policy. Each document is segmented into these policy sentences using the standard Python NLTK sentence extractor. Sentences are cleaned by removing spurious white space characters, surrounding punctuation and lower-casing each word. Once we have extracted all of the sentences for a given bill cluster, we compute the cosine similarity between all pairs of sentences which are represented using a unigram bag-of-words model. We used a simple unweighted bag of words model because in legal text stop words can be import1 In this case, we are generating a similarity “matrix” capturing sentence–sentence similarity. Given the similarity matrix, our next goal is to isolate clusters of variant sentences that likely came from the same source sentence. We elected to use the DBSCAN (Ester et al., 1996) algorithm to generate these clusters. The DBSCAN algorithm provides us with tunable parameters that can isolate better clusters. Specifically, the parameter ϵ controls the maximum distance between any two points in the same neighborhood. By varying ϵ we are able to control both the number of clusters and the amount of sentence variation within a cluster. A second reason for selecting DBSCAN is that the algorithm automatically deals with noisy data points, placing all points that are not close enough to other points in a separate cluster labeled 1The difference between the words ”shall” and ”may” for instance is important, the former requires that a specific action be put on a states budget while the later does not “noise.” Since many sentences in a given bill cluster do not contribute to the reused text between bills, the noise cluster is useful for grouping those sentences together rather than having them be outsiders in “good” clusters. 4.2.2 Multi-Sequence Alignment Once we have sentence clusters we then synthesize a “prototype” sentence from all of the sentences in a given cluster. An ideal prototype “sentence” is one that simultaneously captures the similarity between each sentence in the cluster (the common sentence structures) and the variation between the sentences in a cluster. For a simple pair of (partial) sentences, “The Department of Motor Vehicles retains the right to ...” and “The Department of Transportation retains the right to ...”, a prototype might be of the form, “The Department of { Motor Vehicles, Transportation } retains the rights to ...” Our “sentence” is not strictly a single linear piece of text. Rather, we have a data structure that describes alternative sub-strings and captures variant text. To generate this structure we propose an algorithm that computes an approximation of the optimal multi-sentence alignment (MSA) in the cluster and then generates a prototype sentence representing a ‘consensus’ for sentences in the MSA. We generate an MSA using a modified version of the iterative pairwise alignment algorithm described in (Gusfield, 1997). The greedy algorithm builds a multi-alignment by iteratively applying the Needleman-Wunsch pairwise global alignment algorithm. 
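Before detailing the alignment, here is a minimal sketch of the sentence-clustering step of Section 4.2.1, using scikit-learn's DBSCAN over precomputed cosine distances on unigram counts. The eps value matches one of the settings used later; min_samples is an illustrative choice rather than a documented parameter of the system.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_sentences(sentences, eps=0.15):
    # Unigram bag of words; stop words are deliberately kept (they matter in legal text).
    counts = CountVectorizer(lowercase=True).fit_transform(sentences)
    distances = np.clip(1.0 - cosine_similarity(counts), 0.0, None)
    # DBSCAN over precomputed cosine distances; label -1 collects the "noise" sentences.
    return DBSCAN(eps=eps, min_samples=2, metric="precomputed").fit_predict(distances)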
Needleman-Wunsch computes the optimal pairwise alignment by maximizing the alignment score between two sentences. An alignment score is calculated based on three parameters: word matches, word mismatches (when a word appears in one sentence but not the other), and gaps (when the algorithm inserts a space in one of the sentences). An MSA is generated by the following steps: 1. Construct an edit-distance matrix for all pairs of sentences 2. Construct an initial alignment between the two sentences with the smallest edit distance 3. Repeat this step k times: (a) Select the sentence with the smallest average edit distance to the current MSA. (b) Add the chosen sentence to the MSA by aligning it to the existing MSA. 1583 The algorithm stops after the alignment has reached a size that is determined as a free parameter k (user configured, but can be chosen to be the number of sentences in the cluster). As the algorithm execution is ordered on the edit distance between the current MSA and the next sentence to be added, the larger the MSA, the more variation we are allowing in the prototype sentence. 4.2.3 Synthesizing Prototype Sentences We synthesize a prototype sentence by finding a consensus sentence from all of the aligned sentences in the MSA for a given cluster. We achieve this by going through each “column” of the MSA and using the following rules to decide which token will be used in the prototype. A token can be either a word in one of the sentences or a “space” that was inserted during the alignment process. 1. If there is a token that occurs in the majority of alignments (> 50%) then that token is chosen. 2. If no token appears in a majority, then a special variable token is constructed that displays all of the possible tokens in each sentence. For example the 1st and 3rd columns in Figure 1. 3. If a space is the majority token chosen then it is shown as a variable token with the second most common token. 5 Evaluation We provide experiments that evaluate all three components of LOBBYBACK. 5.1 Model Legislation Corpus To test the effectiveness of LOBBYBACK in recreating a model bill, we first identified a set of known model bills. Our model legislation corpus consists of 1846 model bills that we found by searching on Google using the keywords “model law”, “model policy” and “model legislation.” Most of the corpus is comprised of bills from the conservative lobby group, American Legislative Exchange Council (ALEC, 708 documents), its liberal counterpart, the State Innovative Exchange (SIX, 269 documents) and the non-partisan Council of State Governments (CSG, 470 documents), and the remainder (399) from smaller interest groups that focus on specific issues. Using the clusters we previously described (Section 4.1), we found the most similar cluster to each model bill. This was done by first computing the set of neighbors (ego-network) for a model bill using the same procedure used in creating the bill similarity network. We then matched a bill cluster to the model legislation by finding the bill cluster that had the highest Jaccard similarity (based on the overlapping bills in each cluster) with the neighbor set of a model bill. Each test example in our evaluation data set consists of model bill and its corresponding bill cluster. The total number of model legislation documents that had matches in the state bill corpus was 360 documents. 
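Returning to the consensus rules of Section 4.2.3, the following is a minimal sketch under the assumption that the MSA is given as a list of equal-length token sequences in which None marks an inserted space; the representation and names are illustrative, not the released implementation.

from collections import Counter

def synthesize_prototype(msa):
    # msa: list of aligned token sequences of equal length; None marks an inserted gap.
    prototype = []
    for column in zip(*msa):
        counts = Counter(column)
        token, freq = counts.most_common(1)[0]
        if freq > len(column) / 2 and token is not None:
            prototype.append(token)                      # Rule 1: clear majority token.
        elif freq > len(column) / 2:                     # Rule 3: the majority is a gap,
            rest = [t for t, _ in counts.most_common() if t is not None]
            if rest:                                     # so show the next most common
                prototype.append("{" + rest[0] + "}")    # token as a variable token.
        else:                                            # Rule 2: no majority; show all options.
            options = sorted({t for t in column if t is not None})
            prototype.append("{" + ", ".join(options) + "}")
    return " ".join(prototype)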
Once we have an evaluation data set comprised of model bill/cluster pairs our goal is to compare the prototype sentences we infer for a cluster to the model bill that matches that cluster. Since we do not have ground truth on which sentences from the model bill match sentences in the documents that comprise the cluster we need to infer such labels. In order to identify which sentences from the model legislation actually get re-used in the bill cluster, we take the following steps: 1. Extract all sentences from each of the bills in a cluster and the sentences in the corresponding model legislation. 2. Compute the pairwise cosine similarity between bill sentences and each of the model bill sentences using the same unigram bagof-words model described in Section 4.2.1 3. Compute the “oracle” matching M∗using the Munkres algorithm (Munkres, 1957) The Munkres algorithm gives the best possible one-to-one matching between the sentences in the model legislation and the sentences in the bill clusters. There are some sentences in the model bill that are never used in actual state legislation (e.g., sentences that describe the intent of a law or instructions of how to implement a model policy). Therefore we label model bill sentences in M∗ that match a bill sentence with a score greater than 0.85 as true matches (S∗)2. The final set of 122 evaluation examples consists of all model legislation/bill cluster pairs where more than 50% of model bill sentences have true matches. 2A threshold of 0.85 was found effective in prior work (Garrett and Jansa, 2015) and we observed good separation between matching sentences and non-matches in our data-set. 1584 0.0 0.2 0.4 0.6 0.8 1.0 Threshold 0.0 0.2 0.4 0.6 0.8 1.0 Recall InfoMap Louvain 0.0 0.2 0.4 0.6 0.8 1.0 Threshold 0.0 0.2 0.4 0.6 0.8 1.0 Precision 0.0 0.2 0.4 0.6 0.8 1.0 Threshold 0.0 0.2 0.4 0.6 0.8 1.0 F1 Figure 3: Precision, Recall, and F1 scores for the bill clustering component of LOBBYBACK configured with both the Louvain and InfoMap clustering algorithms. 5.2 Baselines While no specific baseline exists for our problem, we implemented two alternatives to test against. The first, Random-Baseline, was implemented simply to show the performance of randomly constructing prototype documents. The second, LexRank-Baseline, implements a popular extractive summarization method. Random-Baseline – The random-baseline randomly samples sentences from a given bill cluster. The number of sentences it samples is equal to the number of sentences in the optimal matching |M∗| LexRank-Baseline – The LexRank baseline uses the exact same clustering algorithm as LOBBYBACK except instead of synthesizing prototype sentences, it uses the LexRank algorithm (Erkan and Radev, 2004) to pick the most salient sentence from each of the sentence clusters. 5.3 Evaluating Bill Clusters As described in Section 5.1, each model bill is associated with a set of bills that comprise its neighbors in the bill network. In order to evaluate how LOBBYBACK clusters bills we compare the inferred clusters to the corresponding neighbor set for each model bill. The inferred cluster for a given model bill can exhibit false positive and false negative errors. False negatives, in which a bill that exists in the neighbor set of a model bill but is not contained in the inferred cluster, are easy to identify (allowing us to calculate recall). Precision, on the other hand, is more difficult as any “extra” bills in the inferred cluster are not necessarily incorrect. 
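Before continuing with the cluster evaluation, here is a concrete sketch of the sentence-level oracle matching described earlier in this section. The Munkres algorithm is available as the Hungarian solver in SciPy; similarities are negated to form costs, and matches below the 0.85 threshold are discarded. The sentence similarity function itself is assumed to be given.

import numpy as np
from scipy.optimize import linear_sum_assignment

def oracle_matching(model_sents, bill_sents, similarity, threshold=0.85):
    # similarity: callable returning the cosine similarity of two sentences.
    sim = np.array([[similarity(m, b) for b in bill_sents] for m in model_sents])
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    return [(i, j, sim[i, j]) for i, j in zip(rows, cols) if sim[i, j] > threshold]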
It is possible that there are bills which do exhibit text re-use with the model legislation but did not match via the ElasticSearch query used to construct the network. We have found that in practice the second component of LOBBYBACK, which clusters the sentences extracted from a bill cluster, is robust to false-positives due to DBSCAN’s treatment of “noisy” data points. Because of this, we are more concerned with recall in the bill clustering module as any data lost in this step propagates through the rest of the system. Figure 3 shows the precision, recall, and F1 scores for LOBBYBACK coupled with both the Louvain (Blondel et al., 2008) and InfoMap (Rosvall and Bergstrom, 2008) network clustering algorithms. Because we do not know with absolute certainty which state bills are derived from model legislation, we would like to test our approach at different levels of confidence. To do this we vary the threshold on the similarity scores (edge weights) of the ego-network determined for each model legislation. A normalized weight is computed by taking the score provided by ElasticSearch and dividing that edge weight weight by the maximum edge weight in the neighbor set (to control for the unbounded scores provided by ElasticSearch). We vary the threshold for this weight (ranging from 0 to 1) and calculate the precision, recall, and F1 on a neighborhood set that is comprised of all bills that have a weight greater than the threshold. Higher threshold values means fewer state bills are included in the set, but they are increasingly similar to the model bill. As shown in Figure 3, recall stays high for the majority of threshold values. This indicates that both clustering algorithms do a good job at recovering the bills that are the most similar to the model legislation. However, the two clustering algorithms differ somewhat in precision. Louvain, in particular performs worse, as it suffer from “resolution” problems. While effective for networks with large clusters, Louvain can not isolate small groups, of which we have many. For this reason, 1585 Figure 4: Precision, Recall, and F1 scores of LOBBYBACK and baselines. we chose InfoMap as the method to use for the final implementation of LOBBYBACK. 5.4 Evaluating Sentence Clusters We first evaluate the quality of the sentence clusters using the optimal matching M∗described above. For each test example in the evaluation set we generate a prototype document using LOBBYBACK and each of the baselines described above. We then compute a matching M between the prototype sentences and the model bill sentences (using the same procedure described in 5.1), where S is the set of sentences in the prototype and S0.85 is the set of sentences that match with a score greater than 0.85. We compute precision, P, as P = |S0.85|/|S|, and recall, R, as R = |S|/|S∗|. Figure 4 shows precision, recall and F1 scores for both baselines and LOBBYBACK. Each curve is generated by averaging the precision/recall/F1 scores computed for each of the examples in the test set. The x-axis represents the minimum bill cluster size of the test examples for which the score is computed. For example, a minimum cluster size would average over all test examples with at least 2 bills in a cluster. LOBBYBACK relies on the fact that if text was borrowed from a model bill, then it would have been borrowed by many of the bills in the cluster. 
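For reference, a small sketch of the scores just defined, taking the set sizes as inputs and following the definitions in the text verbatim (num_matched = |S0.85|, num_prototype = |S|, num_true = |S*|).

def prototype_scores(num_matched, num_prototype, num_true):
    # P = |S0.85| / |S| and R = |S| / |S*|, as defined above.
    precision = num_matched / num_prototype if num_prototype else 0.0
    recall = num_prototype / num_true if num_true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1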
By analyzing how LOBBYBACK performs with respect to the minimum cluster size, we can determine how much evidence LOBBYBACK needs in order to construct clusters that correspond to sentences in the model bills. While the performance of LOBBYBACK and the LexRank baseline substantially improves over the random baseline, the different between LOBBYBACK and LexRank for this task is negligible. Since our cut-off similarity is 0.85, all sentences above the threshold are treated as true positives, making the distinction between the LexRank baseline and system small. LOBBYBACK performs a little worse than LexRank for large cluster sizes because it is penalized for having space and variable tokens which don’t occur in model bills. Space and variable tokens occur more frequently in prototype sentences in larger clusters because there is more variation in the sentence clusters. 5.5 Evaluating Sentence Synthesis The experiment in the previous section evaluated the quality of the sentence clusters by treating all matching sentences with a similarity greater than 0.85 as true positives. Here we provide an evaluation of the synthesized sentences that LOBBYBACK generates and compare them to the LexRank baseline, which chooses the most salient sentence from each cluster. We evaluate the quality of the synthesized prototype sentences by computing the word-based edit-distance between the prototype sentence with its corresponding model bill sentence in S for each test example. Since the prototypes contain variable and space tokens which do not occur in the model bill sentences we modify the standard edit distance algorithm by not penalizing space tokens and allowing for any of the tokens that comprise a variable to be positive matches. In addition, we remove punctuation and lowercase all words in all of the sentences, regardless of method. We generate the results in Table 1 by averaging the edit distance for a configuration of LOBBYBACK or LexRank over sentence clusters produced for each test example. LOBBYBACK was configured to run with the number of iterations set to the size of the sentence cluster. We compared both the performance of LOBBYBACK and LexRank for DBSCAN ϵ values of 0.1 and 0.15 as well as computing the average edit distance for different minimum sizes of cluster values. As the table shows, LOBBYBACK obtains a lower edit distance than LexRank in every config1586 uration and as the size of the clusters increase the gap between the two increases. The goal of LOBBYBACK is not to be a better summarization algorithm than LexRank. By comparing to LexRank and showing that the edit distances are smaller on average, we can conclude that the prototype sentences created by LOBBYBACK are capturing the text that is “central” or similar within a given cluster. In addition, the prototype sentences produced by LOBBYBACK are superior because they also capture and describe in a succinct way, the variability of the sentences within a cluster. 6 Discussion One assumption that we made about the nature of state adoption of model legislation is that the legislatures make modifications that largely preserve the model language in an effort to preserve policy. However, we currently do not consider cases in which a legislature has intentionally obscured the text while still retaining the same meaning. While not as frequent as common text reuse, Hertel-Fernandez and Kashin (2015) observed that some legislatures almost completely changed the text while reusing the concepts. 
One area of future work would be to try and extend LOBBYBACK to be more robust to these cases. One strategy would be to allow for a more flexible representation of text, such as word vector embeddings. The embeddings might even be used to extend the multisentence alignment to include a penalty based on word distance in embedding space. LOBBYBACK performs well on reconstructing model legislation from automatically generated bill clusters. However, there are a number of improvements that can refine part of the pipeline. A potential change, but one that is more computationally costly, would be to use a deeper parsing of the sentences that we extract from the documents. We used a simple unigram model when computing sentence similarities because we wanted to ensure that stop words were included–due to their importance in legal text. We suspect that by using a parser we could, for example, weight the similarity of noun-phrases, yielding a better similarity matrix and potentially higher precision/recall. Currently LOBBYBACK does not consider the order of the sentences when attempting to construct a prototype document. We envision a future version of LOBBYBACK that tries to reconstruct the original ordering of the prototype sentences. Min Cluster Size Method ϵ 2 4 6 8 LOBBYBACK 0.1 24.4 20.4 18.2 17.2 LexRank 0.1 25.4 22.5 20.3 19.4 LOBBYBACK 0.15 25.5 21.6 19.4 17.9 LexRank 0.15 27.3 25.6 25.0 24.1 Table 1: Mean edit distance scores for LOBBYBACK and LexRank. 7 Conclusion In this paper we present LOBBYBACK, a system to reconstruct the “dark corpora” that is comprised of model bills which are copied (and modified) by resource constrained state legislatures. LOBBYBACK first identifies clusters of text reuse in a large corpora of state legislation and then generates prototype sentences that summarizes the similarity and variation of the copied text in a bill cluster. We believe that by open-sourcing LOBBYBACK and releasing our data of prototype bills to the public, journalists and legal scholars can use our findings to better understand the origination of U.S state laws. Acknowledgements The authors would like to thank Mike Cafarella, Joe Walsh and Rayid Ghani for helpful discussions, the Sunlight Foundation for providing access to data and the anonymous reviewers for their helpful comments. References Frances Stokes Berry and William D. Berry. 1990. State lottery adoptions as policy innovations: An event history analysis. American Political Science Review, 84:395–415, 6. Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008. Xiaoyan Cai and Wenjie Li. 2013. Ranking through clustering: An integrated approach to multi-document summarization. IEEE Transactions on Audio, Speech, and Language Processing, 21(7):1424–1433. Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015. Ranking with recursive neural networks and its application to multi-document summarization. In AAAI Conference on Artificial Intelligence, AAAI’15, pages 2153–2159. AAAI Press. 1587 Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’98, pages 335–336. Elastic. 2016. Elasticsearch reference. https://www.elastic.co/guide/en. G¨unes Erkan and Dragomir R. Radev. 2004. 
Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22(1):457–479, December. Martin Ester, Hans-Peter Kriegel, Jrg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In Knowledge Discovery and Data Mining, KDD’96, pages 226–231. AAAI Press. Kristin N. Garrett and Joshua M. Jansa. 2015. Interest group influence in policy diffusion networks. State Politics & Policy Quarterly, 15(3):387–417. Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A global optimization framework for meeting summarization. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP ’09, pages 4769–4772, Washington, DC, USA. IEEE Computer Society. Virginia Gray. 1973. Innovation in the states: A diffusion study. American Political Science Review, 67(04):1174–1185. Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, New York, NY, USA. Donald P Haider-Markel. 2001. Policy diffusion as a geographical expansion of the scope of political conflict: Same-sex marriage bans in the 1990s. State Politics & Policy Quarterly, 1(1):5–26. Alexander Hertel-Fernandez and Konstantin Kashin. 2015. Capturing business power across the states with text reuse. In Annual Conference of the Midwest Political Science Association, pages 16–19. Molly Jackman. 2013. Alecs influence over lawmaking in state legislatures. Brookings Institute Blog, December 6. Joshua M Jansa, Eric R Hansen, and Virginia H Gray. 2015. Copy and paste lawmaking: The diffusion of policy language across american state legislatures. Working Paper. Stephanie L Kent and Jason T Carmichael. 2015. Legislative responses to wrongful conviction: Do partisan principals and advocacy efforts influence statelevel criminal justice policy? Social science research, 52:147–160. Terence C Mournet. 2005. Oral tradition and literary dependency: Variability and stability in the synoptic tradition and Q, volume 195. Mohr Siebeck. J. Munkres. 1957. Algorithms for the assignment and transportation problems. Journal of the Society of Industrial and Applied Mathematics, 5(1):32–38, March. Dana Jill Patton. 2003. The Effect of U.S. Supreme Court Intervention on the Innovation and Diffusion of Post-Roe Abortion Policies in the American States, 1973-2000. Ph.D. thesis, University of Kentucky. Vahed Qazvinian and Dragomir R. Radev. 2008. Scientific paper summarization using citation summary networks. In International Conference on Computational Linguistics, COLING ’08, pages 689–696, Stroudsburg, PA, USA. Association for Computational Linguistics. Martin Rosvall and Carl T. Bergstrom. 2008. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences, 105(4):1118–1123. Charles R Shipan and Craig Volden. 2008. The mechanisms of policy diffusion. American Journal of Political Science, 52(4):840–857. Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. In ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’08, pages 299–306. Lu Wang and Claire Cardie. 2013. Domainindependent abstract generation for focused meeting summarization. In Annual Meeting of the Association for Computational Linguistics, ACL’13, pages 1395–1405. Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. 
Multi-document summarization by maximizing informative content-words. In International Joint Conference on Artificial Intelligence, IJCAI'07, pages 1776–1782.
2016
149
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 150–160, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Generalized Transition-based Dependency Parsing via Control Parameters Bernd Bohnet, Ryan McDonald, Emily Pitler and Ji Ma Google Inc. {bohnetbd,ryanmcd,epitler,maji}@google.com Abstract In this paper, we present a generalized transition-based parsing framework where parsers are instantiated in terms of a set of control parameters that constrain transitions between parser states. This generalization provides a unified framework to describe and compare various transitionbased parsing approaches from both a theoretical and empirical perspective. This includes well-known transition systems, but also previously unstudied systems. 1 Introduction Transition-based dependency parsing is perhaps the most successful parsing framework in use today (Nivre, 2008). This is due to the fact that it can process sentences in linear time (Nivre, 2003); is highly accurate (Zhang and Nivre, 2011; Bohnet and Nivre, 2012; Weiss et al., 2015); and has elegant mechanisms for parsing non-projective sentences (Nivre, 2009). As a result, there have been numerous studies into different transition systems, each with varying properties and complexities (Nivre, 2003; Attardi, 2006; Nivre, 2008; Nivre, 2009; G´omez-Rodr´ıguez and Nivre, 2010; Choi and Palmer, 2011; Pitler and McDonald, 2015). While connections between these transition systems have been noted, there has been little work on developing frameworks that generalize the phenomena parsed by these diverse systems. Such a framework would be beneficial for many reasons: It would provide a language from which we can theoretically compare known transition systems; it can give rise to new systems that could have favorable empirical properties; and an implementation of the generalization allows for comprehensive empirical studies. In this work we provide such a generalized transition-based parsing framework. Our framework can be cast as transition-based parsing as it contains both parser states as well as transitions between these states that construct dependency trees. As in traditional transition-based parsing, the state maintains two data structures: a set of unprocessed tokens (normally called the buffer); and a set of operative tokens (often called the stack). Key to our generalization is the notion of active tokens, which is the set of tokens in which new arcs can be created and/or removed from consideration. A parser instantiation is defined by a set of control parameters, which dictate: the types of transitions that are permitted and their properties; the capacity of the active token set; and the maximum arc distance. We show that a number of different transition systems can be described via this framework. Critically the two most common systems are covered – arc-eager and arc-standard (Nivre, 2008). But also Attardi’s non-projective (Attardi, 2006), Kuhlmann’s hybrid system (Kuhlmann et al., 2011), the directed acyclic graph (DAG) parser of Sagae and Tsujii (2008), and likely others. More interestingly, the easy-first framework of Goldberg and Elhadad (2010) can be described as an arc-standard system with an unbounded active token capacity. We present a number of experiments with an implementation of our generalized framework. One major advantage of our generalization (and its implementation) is that it allows for easy exploration of novel systems not previously studied. 
In Section 5 we discuss some possibilities and provide experiments for these in Section 6. 2 Related Work Transition-based dependency parsing can be characterized as any parsing system that maintains a 150 state as well as a finite set of operations that move the system from one state to another (Nivre, 2008). In terms of modern statistical models that dominate the discourse today, the starting point is likely the work of Kudo and Matsumoto (2000) and Yamada and Matsumoto (2003), who adopted the idea of cascaded chunking from Abney (1991) in a greedy dependency parsing framework. From this early work, transition-based parsing quickly grew in scope with the formalization of the arc-eager versus arc-standard paradigms (Nivre, 2003; Nivre, 2008), the latter largely being based on well-known shift-reduce principles in the phrase-structure literature (Ratnaparkhi, 1999). The speed and empirical accuracy of these systems – as evident in the widely used MaltParser software (Nivre et al., 2006a) – led to the study of a number of different transition systems. Many of these new transition systems attempted to handle phenomena not covered by arc-eager or arc-standard transition systems, which inherently could only produce projective dependency trees. The work of Attardi (2006), Nivre (2009), G´omez-Rodr´ıguez and Nivre (2010), Choi and Palmer (2011), and Pitler and McDonald (2015) derived transition systems that could parse nonprojective trees. Each of these systems traded-off complexity for empirical coverage. Additionally, Sagae and Tsujii (2008) developed transition systems that could parse DAGs by augmentating the arc-standard and the arc-eager system. Bohnet and Nivre (2012) derived a system that could produce both labeled dependency trees as well as part-ofspeech tags in a joint transition system. Taking this idea further Hatori et al. (2012) defined a transition system that performed joint segmentation, tagging and parsing. In terms of empirical accuracy, from the early success of Nivre and colleagues (Nivre et al., 2006b; Hall et al., 2007; Nivre, 2008), there has been an succession of improvements in training and decoding, including structured training with beam search (Zhang and Clark, 2008; Zhang and Nivre, 2011), incorporating graph-based rescoring features (Bohnet and Kuhn, 2012), the aformentioned work on joint parsing and tagging (Bohnet and Nivre, 2012), and more recently the adoption of neural networks and feature embeddings (Chen and Manning, 2014; Weiss et al., 2015; Dyer et al., 2015; Alberti et al., 2015). In terms of abstract generalizations of transition systems, the most relevant work is that of G´omez-Rodr´ıguez and Nivre (2013) – which we abbreviate GR&N13. In that work, a generalized framework is defined by first defining a set of base transitions, and then showing that many transitionbased systems can be constructed via composition of these base transitions. Like our framework, this covers common systems such as arc-eager and arcstandard, as well as easy-first parsing. In particular, arc construction in easy-first parsing can be seen as an action composed of a number of shifts, an arc action, and a number of un-shift actions. The primary conceptual difference between that work and the present study is the distinction between complex actions versus control parameters. In terms of theoretical coverage, the frameworks are not equivalent. 
For instance, our generalization covers the system of Attardi (2006), whereas GR&N13 cover transition systems where multiple arcs can be created in tandem. In Section 7 we compare the two generalizations. 3 Generalized Transition-based Parsing A transition system must define a parser state as well as a set of transitions that move the system from one state to the next. Correct sequences of transitions create valid parse trees. A parser state is typically a tuple of data structures and variables that represent the dependency tree constructed thus far and, implicitly, possible valid transitions to the next state. In order to generalize across the parser states of transition-based parsing systems, we preserve their common parts and make the specific parts configurable. In order to generalize across the transitions, we divide the transitions into their basic operations. Each specific transition is then defined by composition of these basic operations. By defining the properties of the data structures and the composition of the basic operations, different transition systems can be defined and configured within one single unified system. As a consequence, we obtain a generalized parser that is capable of executing a wide range of different transition systems by setting a number of control parameters without changing the specific implementation of the generalized parser. 3.1 Basic Notation In the following, we use a directed unlabeled dependency tree T = ⟨V, A⟩for a sentence x = 151 This is an example with two arcs (a) Arc-standard: is and example are eligible for arcs. This is an example with two arcs (b) Arc-eager: example and with are eligible for arcs. This is an example with two arcs (c) Easy-first: All unreduced tokens are active (bolded). Figure 1: A partially processed dependency tree after having just added the arc (example, an) in the arc-standard, arc-eager, and easy-first systems. Tokens in the operative token set O are shaded orange, while tokens in the unordered buffer U are in an unshaded box. The bolded tokens are in ACTIVE(O) and eligible for arcs (Section 3.4). w1, ..., wn, where Vx = {1, ..., n} and V r x = Vx ∪{r}, Ax ⊂V r x × Vx. r is a placeholder for the root node and set either to 0 (root to the left of the sentence) or n + 1 (root to the right). The definition is mostly equivalent to that of K¨ubler et al. (2009) but deviates in the potential handling of the root on the right (Ballesteros and Nivre, 2013). The set of nodes Vx index the words of a sentence x and V r x includes in addition the artificial root node r. Let A be the arc set, i.e., (i, j) ∈A iff there is a dependency from i to j. We use as alternative notation (i →j). This is the arc set the algorithm will create. For ease of exposition, we will only address unlabeled parsing. However, for our experiments we do implement a labeled parsing variant using the standard convention of composing arc transitions with corresponding arc labels. 3.2 Generalized Parser State Let U be an unordered set of buffered unprocessed tokens. This set is identical to the buffer from transition-based parsing. Following standard notation, we will use i|U to indicate that i is the leftmost element of the set. Let O be an ordered set of operative tokens. Specifically, O is the set of tokens that 1) have been moved out of U, and 2) are not themselves reduced. The set O is similar in nature to the traditional stack of transition-based parsing, but is not restricted to stack operations. 
As in transitionTransitions “Adds an arc from j to i, both ∈ACTIVE(O).” ←i,j (O, U, A) ⇒(O, U, A ∪(j →i)) “Adds an arc from i to j, both ∈ACTIVE(O).” →i,j (O, U, A) ⇒(O, U, A ∪(i →j)) “Removes token i ∈ACTIVE(O) from O.” −i (O...i..., U, A) ⇒(O, U, A) ”Moves the top token from U to the top of O.” + (O, i|U, A) ⇒(O|i, U, A) Figure 2: Base generalized transitions over parser states. based parsing we will use the notation O|i to indicate that i is the rightmost element of the set; O[n] is the n-th rightmost element of the set. Figure 1 shows the set of tokens in O within shaded boxes and U within unshaded boxes. 3.3 Generalized Transitions The above discussion describes what a generalized transition-based parser state looks like; it does not describe any transitions between these states, which is the core of transition-based parsing. In this section we present a set of basic transitions, which themselves can be composed to make more complex transitions (similar to G´omez-Rodr´ıguez and Nivre (2013)). Let T = {←i,j, →i,j, −i, +} be the set of basic transitions (Figure 2), which have analogues to standard transition-based parsing. These transitions come with the standard preconditions, i.e., a root cannot be a modifier; each token can modify at most one word1; a token can only be reduced if it has a head; and a shift can only happen if the unoperative buffer is non-empty. We will often refer to these as LEFT-ARC (←i,j), RIGHT-ARC (→i,j), REDUCE (−i), and SHIFT (+). We additionally refer to LEFT-ARC and RIGHT-ARC together as arccreation actions. 3.4 Control Parameters Instantiations of a transition system are defined via control parameters. We defined two sets of such parameters. The first, we call global parameters, dictates system wide behaviour. The second, we call transition parameters, dictates a specific behaviour for each transition. 1This precondition is not needed for DAG transition systems. 152 Global Control Parameters We have two parameters for the broader behaviour of the system. 1. Active token capacity K. The active set of tokens ACTIVE(O) that can be operated on by the transitions is the set {O[min(|O|, K)], ..., O[1]}. K additionally determines the size of O at the start of parsing. E.g., if K = 2, then we populate O with the first two tokens. This is equivalent to making SHIFT deterministic while |O| < K. 2. Max arc distance D. I.e., arcs can only be created between two active tokens O[i] and O[j] if |i −j| ≤D. Transition Control Paramters Let M(T ) be a multiset of transitions, such that if t ∈M(T ), then t ∈T . Note that M(T ) is a multiset, and thus can have multiple transitions of the same type. For each t ∈M(T ), our generalization requires the following control parameters to be set (default in bold): 1. Bottom-up: B ∈{[t]rue, [f]alse}. Whether creating an arc also reduces it. Specifically we will have two parameters, BL and BR, which specify whether LEFT/RIGHT-ARC actions are bottom up. We use the notation and say B = true to mean BL = BR = true. For example BL = true indicates that ←i,j is immediately followed by a reduce −i. 2. Arc-Shift: S ∈{[t]rue, [f]alse}. Whether creating an arc also results in SHIFT. Specifically we will have two parameters, SL and SR, which specify whether LEFT/RIGHTARC actions are joined with a SHIFT. We use the notation and say S = true to mean SL = SR = true. For example SL = true indicates that ←i,j is immediately followed by +. 3. Periphery: P ∈{[l]eft, [r]ight, na}. 
If a transition must operate on the left or right periphery of the active token set ACTIVE(O). For arc-creation transitions, this means that at least one of the head or modifier is on the specified periphery. If the value is na, that means that the action is not constrained to be on the periphery. Note, that when K ≤2, all arc-creation actions by default are on the periphery. Each of these control parameters has a default value, which will be assumed if unspecified. Note that the relevance of these parameters is transition dependent. E.g., a SHIFT requires no such control parameters and a REDUCE needs neither B nor S. These control parameters allow limited compositionality of the basic transitions in Figure 2. Unlike G´omez-Rodr´ıguez and Nivre (2013), each transition includes at most one SHIFT, at most one REDUCE, and at most one LEFT-ARC or RIGHTARC. I.e., the most compositional transition is a LEFT/RIGHT-ARC with a REDUCE and/or a SHIFT. Even with this restriction, all of the transition systems covered by G´omez-Rodr´ıguez and Nivre (2013) can still be expressed in our generalization. 3.5 Generalized Transition System To summarize, a generalized transition is defined as follows: 1. A parser state: Γ = ⟨O, U, A⟩. 2. A set of basic transitions: T = {←i,j, →i,j, −i, +}. And each transition system instantiation must further define: 1. Values for global control parameters: K and D. 2. A multiset of valid transitions M(T ), where ∀t ∈M(T ), then t ∈T . 3. For each t ∈M(T ) values for B, S, and P. 4 Transition System Instantiations The instantiation of the transition system consists of setting the capacity K and distance D as well as the transition control parameters. In order to make the comparison clearer, we will define typical transition systems using the notation of Kuhlmann et al. (2011). Here, a parser state is a triple (σ, β, A), where σ|i is a stack with top element i, j|β is a buffer whose next element is j, and A the set of created arcs. To make notation cleaner, we will drop indexes whenever the context makes it clear. E.g., if the active token capacity is 2 (K = 2), then necessarily for ←i,j, i = 2 and j = 1 and we can write ←. When K > 2 or D > 1, a base arc-creation action can be instantiated into multiple transitions that only differ by the indexes. E.g., when K = 3 and D = 2, ←with P = na can have three instantiations: ←3,2, ←2,1 and 153 ←3,1. To keep exposition compact, in such circumstances we use ⋆to denote the set of indexpairs allowed by the given K, D and P values. 4.1 Arc-standard Arc-standard parsing is a form of shift-reduce parsing where arc-creations actions happen between the top two elements on the stack: LEFT-ARC: (σ|i|j, β, A) ⇒ (σ|j, β, A ∪(j →i)) RIGHT-ARC: (σ|i|j, β, A) ⇒ (σ|i, β, A ∪(i →j)) SHIFT: (σ, i|β, A) ⇒(σ|i, β, A) This is easily handled in our generalization by having only two active tokens. ←and →actions with default parameter values are used to simulate LEFT-ARC and RIGHT-ARC respectively, and + is used to shift tokens from U to O. M(T ) base parameter values LEFT-ARC ← {default} RIGHT-ARC → {default} SHIFT + capacity K 2 arc distance D 1 Note that when K = 2, arc-creations are by definition always on the periphery. 4.2 Arc-eager Arc-eager transition systems have been described in various ways. Kuhlmann et al. (2011) defines it as operations between tokens at the top of the stack and front of the buffer. 
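Before turning to arc-eager (whose standard stack/buffer formulation is given directly below), the arc-standard instantiation above can be written down as a small configuration of the generalized parser. The sketch is ours; SystemConfig and TransitionConfig are hypothetical names, and the default flag values follow Section 3.4 (B = true, S = false, P = na).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TransitionConfig:
    """Per-transition control parameters (defaults as in Section 3.4)."""
    bottom_up: bool = True    # B: arc creation also reduces the modifier
    arc_shift: bool = False   # S: arc creation is joined with a SHIFT
    periphery: str = "na"     # P: "l", "r", or "na"

@dataclass
class SystemConfig:
    """Global control parameters plus the multiset of transitions M(T)."""
    capacity: int                                   # K: active token capacity
    max_arc_distance: int                           # D: max arc-creation distance
    transitions: Dict[str, List[TransitionConfig]]  # a list per type, since M(T) is a multiset

# Arc-standard (Section 4.1): K = 2, D = 1, LEFT-ARC and RIGHT-ARC with
# default parameters, plus SHIFT (whose control parameters are irrelevant).
ARC_STANDARD = SystemConfig(
    capacity=2,
    max_arc_distance=1,
    transitions={
        "LEFT-ARC": [TransitionConfig()],
        "RIGHT-ARC": [TransitionConfig()],
        "SHIFT": [TransitionConfig()],
    },
)
```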
LEFT-ARC: (σ|i, j|β, A) ⇒ (σ, j|β, A ∪(j →i)) RIGHT-ARC: (σ|i, j|β, A) ⇒ (σ|i|j, β, A ∪(i →j)) REDUCE: (σ|i, β, A) ⇒(σ, β, A) SHIFT: (σ, i|β, A) ⇒(σ|i, β, A) In our generalization, we can simulate this by having two active tokens on the operative set, representing the top of the stack and front of the buffer. SHIFT and LEFT-ARC are handled in the same way as in the arc-standard system. The RIGHT-ARC action is simulated by →with modified parameters to account for the fact that the action keeps the modifier in O and also shifts a token from U to O. The REDUCE action is handled by −with the periphery set to left so as to remove the second rightmost token from O. M(T ) base parameter values LEFT-ARC ← {default} RIGHT-ARC → {B = f, S = t} REDUCE − {P = l} SHIFT + K 2 D 1 Note that the artificial root must be on the right of the sentence to permit the reduce to operate at the left periphery of the active token set. 4.3 Easy-first For easy-first parsing (Goldberg and Elhadad, 2010), the number of active tokens is infinite or, more precisely, equals to the number of tokens in the input sentence, and arc-creation actions can happen between any two adjacent tokens. M(T ) base parameter values LEFT-ARC ←⋆ {default} RIGHT-ARC →⋆ {default} K ∞ D 1 Note here ⋆denotes the set of indexes {(i, j)|i ≤ |O|, j ≥1, i = j + 1}. Thus, at each time step there are 2∗|O| actions that need to be considered. Additionally, reduce is always composed with arccreation and since K = ∞, then there is never a SHIFT operation as O is immediately populated with all tokens on the start of parsing. 4.4 Kuhlmann et al. (2011) Kuhlmann et al. present a ‘hybrid’ transition system where the RIGHT-ARC action is arc-standard in nature, but LEFT-ARC actions is arc-eager in nature, which is equivalent to the system of Yamada and Matsumoto (2003). We can get the same effect as their system by allowing three active tokens, representing the top two tokens of the stack and the front of the buffer. Transitions can be handled similarly as the arc-standard system where only the periphery parameter need to be changed accordingly. This change also requires the root to be on the right of the sentence. M(T ) base parameter values LEFT-ARC ← {P = r} RIGHT-ARC → {P = l} SHIFT + K 3 D 1 154 4.5 Sagae and Tsujii (2008) Sagae and Tsuji present a model for projective DAG parsing by modifying the LEFT-ARC and RIGHT-ARC transitions of the arc-eager system. In their system, the LEFT-ARC transition does not remove the dependent, and RIGHT-ARC transition does not shift any token from the input to the stack. LEFT-ARC: (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪(j →i)) RIGHT-ARC: (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪(i →j)) REDUCE: (σ|i, β, A) ⇒(σ, β, A) SHIFT: (σ, i|β, A) ⇒(σ|i, β, A) This can be easily simulated by modifying arceager system mentioned in Sec. 4.2 such that both the LEFT-ARC and RIGHT-ARC transition keep the stack/buffer untouched. M(T ) base parameter values LEFT-ARC ← {B = f} RIGHT-ARC → {B = f} REDUCE − {P = l} SHIFT + K 2 D 1 4.6 Attardi (2006) Now we show how our framework can extend to non-projective systems. This is primarily controlled by the arc-distance parameter D. The base of the Attardi non-projective transition system is the arc-standard system. Attardi modifies RIGHT/LEFT-ARC actions that operate over larger contexts in the stack. For simplicity of exposition below we model the variant of the Attardi system described in Cohen et al. (2011). RIGHT-ARCN: (σ|i1+N| . . . |i2|i1, β, A) ⇒ (σ|iN| . . . |i2|i1, β, A ∪(i1 →i1+N)) LEFT-ARCN: (σ|i1+N| . . . 
|i2|i1, β, A) ⇒ (σ|i1+N| . . . |i2, β, A ∪(i1+N →i1)) SHIFT: (σ, i|β, A) ⇒(σ|i, β, A) Conceptually the Attardi system can be generalized to any value of N2, and Attardi specifically allows N=1,2,3. Actions that create arcs between non-adjacent tokens permit limited nonprojectivity. Thus, for Attardi, we set the number of active tokens to N+1, to simulate the top N+1 tokens of the stack, and set the max distance D to N to 2Attardi (2006) also introduces Extract/Insert actions to a temporary buffer that he argues generalizes to all values of N. We don’t account for that specific generalization here. indicate that tokens up to N positions below the top of the stack can add arcs with the top of the stack. M(T ) base parameter values LEFT-ARC ←⋆ {P = r} RIGHT-ARC →⋆ {P = r} K N + 1 D N (3 for Attardi (2006)) The critical parameter for each arc action is that P = r. This means that the right peripheral active token always must participate in the action, as does the right-most token of the stack for the original Attardi. Here ⋆denotes {(i, j)|j = 1, i −j ≤D}. 5 Novel Transition Systems Any valid setting of the control parameters could theoretically define a new transition system, however not all such combinations will be empirically reasonable. We outline two potential novel transition systems suggested by our framework, which we will experiment with in Section 6. This is a key advantage of our framework (and implementation) – it provides an easy experimental solution to explore novel transition systems. 5.1 Bounded Capacity Easy-first Easy-first and arc-standard are similar since they are both bottom-up and both create arcs between adjacent nodes. The main difference lies in the capacity K, which is 2 for arc-standard and ∞for easy-first. In addition, the shift action is needed by the arc-standard system. Each system has some advantages: arc-standard is faster, and somewhat easier to train, while easy-first can be more accurate (under identical learning settings). Seen this way, it is natural to ask: what happens if the active token range K is set to k, with 2 < k < ∞? We explore various values in the region between arc-standard and easy-first in Section 6. M(T ) base parameter values LEFT-ARC ←⋆ {default} RIGHT-ARC →⋆ {default} SHIFT + K k D 1 5.2 Non-projective Easy-first A simple observation is that by allowing D > 1 makes any transition system naturally non155 projective. One example would be a nonprojective variant of easy-first parsing: M(T ) base parameter values LEFT-ARC ←⋆ {default} RIGHT-ARC →⋆ {default} K ∞ D any value of N Here N denotes the maximum arc-creation distance. We also note that one could potentially vary both K and D simultaneously, giving a nonprojective limited capacity easy-first system. 6 Implementation and Experiments Our implementation uses a linear model P yi∈y w· f(x, y1, . . . , yi) to assign a score to a sequence y = y1, y2, . . . ym of parser transitions, given sentence x. Model parameters are trained using the structured perceptron with “early update” (Collins and Roark, 2004) and features follow that of Zhang and Nivre (2011). For the arc-standard and arc-eager transition systems, we use the static oracle to derive a single gold sequence for a given sentence and its gold tree. For systems where there is no such static oracle, for example the easy-first system, we use the method proposed by Ma et al. 
(2013) to select a gold sequence such that, for each update, the condition w · f(x, ˆyk) < w · f(x, yk) always holds, which is required for perceptron convergence. Here ˆyk denotes the length k prefix of a correct sequence and yk denotes the highest scoring sequence in the beam. We carry out the experiments on the Wall Street Journal using the standard splits for the training set (section 2-21), development set (section 22) and test set (section 23). We converted the constituency trees to Stanford dependencies version 3.3.0 (de Marneffe et al., 2006). We used a CRFbased Part-of-Speech tagger to generate 5-fold jack-knifed Part-of-Speech tag annotation of the training set and used predicted tags on the development and test set. The tagger reaches accuracy scores similar to the Stanford tagger (Toutanova et al., 2003) with 97.44% on the test set. The unlabeled and labeled accuracy scores exclude punctuation marks. Obviously, there are many interesting instantiations for the generalized transition system. In particular, it would be interesting to investigate parsing performance of systems with different active 0 10 20 30 40 50 60 70 88 89 90 91 92 K UAS LAS Figure 3: Labeled/unlabeled attachment scores with respect to the active capacity K. token size and arc-distance. Before we investigate these system in the next subsections, we present the performance on standard systems. 6.1 Common Systems The results for arc-standard, arc-eager and easyfirst (bottom of Table 1) show how standard systems perform within our framework. Easy-first’s labeled attachment score (LAS) is 0.46 higher than the LAS of arc-eager when using the same feature set. These results are competitive with the stateof-the-art linear parsers, but below recent work on neural network parsers. A future line of work is to adopt such training into our generalization. System UAS LAS b Dyer et al. (2015) 93.10 90.90 – Weiss et al. (2015) 93.99 92.05 – Alberti et al. (2015) 94.23 92.23 8 Zhang and Nivre (2011)⋆ 92.92 90.88 32 Zhang and McDonald (2014) 93.22 91.02 – Arc-standard (gen.) 92.81 90.68 32 Arc-eager (gen.) 92.88 90.73 32 Easy-first (gen.) 93.31 91.19 32 Table 1: State-of-the-art comparison. ⋆denotes our own re-implementation. The systems in the first block on the top use neural networks. 6.2 Bounded Capacity Easy-first Table 1 shows that easy-first is more accurate than arc-standard. However, it is also more computationally expensive. By varying the number of active tokens, we can investigate whether there is a sweet spot in the accuracy vs. speed trade-off. Figure 3 shows labeled/unlabeled accuracy scores on the development set for active token 156 sizes ranging from 2 to 32, all with beam size 1. Different from the original easy-first system where all tokens are initialized as active tokens, in the bounded capacity system, a token can be active only after it has been shifted from U to O. We observe that the accuracy score increases remarkably by over a point when active token capacity gets increased from 2 to 3, and peaks at a active token capacity of 4. Generally, more active tokens allows the parser to delay “difficult” decisions to later steps and choose the “easy” ones at early steps. Such behavior has the effect of limiting the extent of error propagation. The result also suggests that a modification as simple as adding one more active token to the arc-standard system can yield significant improvement. With a larger active token capacity, we see a slight drop of accuracy. 
This is likely related to the parser having to predict when to perform a shift transition. In comparison, the vanilla easyfirst parser does not need to model this. 6.3 Non-projective Easy-first For the experiments with non-projective easy-first, we use the Dutch and Danish CoNLL 2006 corpora. To assess the performance, we applied the evaluation rules of the 2006 shared task. In order to make the non-projective systems perform well, we added to all feature templates the arc-distance D. In these experiments, we included in the training an artificial root node on the right since Dutch as well a few sentences of the Danish corpus have more than one root node. In the experiments, we use the easy-first setting with infinite set of active tokens K and increase stepwise the arc-distance D. For training, we filter out the sentences which contain non-projective arcs not parseable with the selected setting. Table 2 provides an overview of the performance with increasing arc-distance. At the bottom of the table, we added accuracy scores for the bounded capacity non-projective easy-first parser since we think these settings provide attractive trade-offs between accuracy and complexity. The original easy-first performs O(n) feature extractions and has a runtime of O(n log(n)) assuming a heap is used to extract the argmax at each step and feature extraction is done over local contexts only (Goldberg and Elhadad, 2010). For the non-projective variant of easy-first with D = d, O(dn) feature extractions are required. Thus, for the unrestricted variant where K = D = ∞, Danish Dutch D K UAS LAS CUR NPS UAS LAS CUR NPS 1 ∞90.22 85.51 41.30 84.37 79.11 75.41 32.45 63.55 2 ∞90.28 85.85 59.78 96.91 84.73 81.01 70.59 92.44 3 ∞90.68 86.07 65.22 98.82 85.03 81.65 77.99 99.01 4 ∞90.58 85.53 69.57 99.69 85.99 82.73 76.85 99.89 5 ∞90.84 86.11 65.22 99.88 85.21 81.93 76.09 99.96 6 ∞90.78 86.31 68.48 99.94 84.57 81.13 75.90 100.0 7 ∞90.64 85.91 63.04 100.0 85.07 82.01 77.04 100.0 4 5 90.74 85.87 66.30 99.69 86.51 82.91 76.66 99.89 5 6 91.00 86.21 72.83 99.88 86.03 82.73 76.09 99.96 Table 2: Experiments with non-projective easyfirst and bounded capacity easy-first with D the arc-distance, K the active token capacity (∞= all tokens of a sentence), UAS and LAS are the unlabeled and labeled accuracy scores, CUR is the recall of crossing edges and NPS shows the percentage of sentences covered in the training set where 100% means all non-projective (and projective) sentences in the training can be parsed and are included in training. O(n2) feature extractions are required. Table 2 explored more practical settings: when K = ∞, D ≤7, the number of feature extractions is back to O(n) with a runtime of O(n log(n)), matching the original easy-first complexity. When both K and D are small constants as in the lower portion of Table 2, the runtime is O(n). 7 Comparison with GR&N13 The work most similar to ours is that of G´omezRodr´ıguez and Nivre (2013) (GR&N13). They define a divisible transition system with the principle to divide the transitions into elementary transitions and then to compose from these elementary transitions complex transitions. GR&N13 identified the following five elementary transitions: SHIFT: (σ, i|β, A) ⇒(σ|i, β, A) UNSHIFT: (σ|i, β, A) ⇒(σ, i|β, A) REDUCE: (σ|i, β, A) ⇒(σ, β, A) LEFT-ARC: (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪(j →i)) RIGHT-ARC: (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪(i →j)) The notion of function composition is used to combine the elementary transitions to the complex ones. 
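Such composition can be sketched directly over the elementary transitions listed above. The encoding below is our own and follows the (σ, β, A) notation; it is not taken from any released implementation.

```python
from typing import Callable, FrozenSet, Tuple

# A configuration in the Kuhlmann et al. (2011) notation: (sigma, beta, A),
# with sigma a stack (top = last element) and beta a buffer (front = first).
Config = Tuple[Tuple[int, ...], Tuple[int, ...], FrozenSet[Tuple[int, int]]]
Transition = Callable[[Config], Config]

def shift(cfg: Config) -> Config:
    sigma, beta, arcs = cfg
    return sigma + (beta[0],), beta[1:], arcs

def reduce_(cfg: Config) -> Config:  # trailing underscore avoids the builtin
    sigma, beta, arcs = cfg
    return sigma[:-1], beta, arcs

def left_arc(cfg: Config) -> Config:
    sigma, beta, arcs = cfg
    return sigma, beta, arcs | {(beta[0], sigma[-1])}   # arc j -> i

def right_arc(cfg: Config) -> Config:
    sigma, beta, arcs = cfg
    return sigma, beta, arcs | {(sigma[-1], beta[0])}   # arc i -> j

def compose(*steps: Transition) -> Transition:
    """The composition operator: chain elementary transitions, applied left
    to right, into one complex transition."""
    def composed(cfg: Config) -> Config:
        for step in steps:
            cfg = step(cfg)
        return cfg
    return composed

# E.g., the arc-eager RIGHT-ARC of Section 4.2 adds the arc and then shifts:
arc_eager_right = compose(right_arc, shift)
```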
For instance, arc-standard would have three actions: SHIFT; RIGHT-ARC ⊕REDUCE; LEFTARC ⊕REDUCE. The first difference we can note is the GR&N13 cannot instantiate transition systems that produce non-projective trees. This is surely a superficial 157 difference, as GR&N13 could easily add transitions with larger arc-distance or even SWAP actions (Nivre, 2009). However, a less superficial difference is that our generalization uses control parameters to construct instantiations of transition systems, instead of solely via transition composition like GR&N13. Through this, our generalization results in a minimal departure from ‘standard’ representations of these systems. While this may seem like a notational difference, this is particularly a benefit with respect to implementation, as previous techniques for classification and feature extraction can largely be reused. For example, in GR&N13, the definition of the easy-first transition system (Goldberg and Elhadad, 2010) is complex, e.g., a RIGHT-ARC at position i requires a compositional transition of i SHIFT actions, a RIGHT-ARC, a SHIFT, a REDUCE, then i UNSHIFT actions. Note, that this means in any implementation of this generalization, the output space for a classifier will be very large. Furthermore, the feature space would ultimately need to take the entire sentence into consideration, considering that all compositional actions are centered on the same state. In our transition system, on the other hand, easy-first operates almost as it does in its native form, where n LEFT-ARC and n RIGHT-ARC actions are ranked relative to each other. There are only two actions, each instantiated for every location in the state. Thus the output space and feature extraction are quite natural. This leads to straight-forward implementations allowing for easy experimentation and discovery. Unlike GR&N13, we present empirical results for both known transition systems as well as some novel systems (Section 6). 8 Conclusion We presented a generalized transition system that is capable of representing and executing a wide range of transition systems within one single implementation. These transition systems include systems such as arc-standard, arc-eager, easy-first. Transitions can be freely composed of elementary operations. The transition system shows perfect alignment between the elementary operations on one hand and their preconditions and the oracle on the other hand. We adjust the transition system to work on a stack in a uniform way starting at a node on the stack and ending with the top node of the stack. The results produced by this system are more comparable as they can be executed with the same classifier and feature extraction system. Finally, we would like to highlight two insights that the experiments provide. First, a few more active tokens than two can boost the accuracy level of an arc-standard transition system towards the level of an easy-first transition system. These parsing systems maintain very nicely the linear complexity of the arc-standard transition system while they provide a higher accuracy similar to those of easy-first. Second, non-projective trees can be parsed by allowing a larger arc-distance which is a simple way to allow for non-projective edges. We think that the transition systems with more active tokens or the combination with edges that span over more words provide very attractive transition systems for possible future parsers. 
Acknowledgments We would like to thank the anonymous reviewers, Slav Petrov and Michael Collins for valuable comments on earlier versions of this manuscript. References Steven Abney. 1991. Parsing by chunks. In Robert Berwick, Steven Abney, and Carol Tenny, editors, Principle-Based Parsing, pages 257–278. Kluwer. Chris Alberti, David Weiss, Greg Coppola, and Slav Petrov. 2015. Improved transition-based parsing and tagging with neural networks. In Proceedings of EMNLP 2015. Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL), pages 166– 170. Miguel Ballesteros and Joakim Nivre. 2013. Going to the roots of dependency parsing. Computational Linguistics, 39(1):5–13. Bernd Bohnet and Jonas Kuhn. 2012. The best of both worlds – a graph-based completion model for transition-based parsers. In Proceedings of the 13th Conference of the European Chpater of the Association for Computational Linguistics (EACL), pages 77–87. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical 158 Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 1455–1465. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Empirical Methods in Natural Language Processing. Jinho D Choi and Martha Palmer. 2011. Getting the most out of transition-based dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Shay B Cohen, Carlos G´omez-Rodr´ıguez, and Giorgio Satta. 2011. Exact inference for generative probabilistic non-projective dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1234–1245. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 112–119. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC). Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL 2015, pages 334–343. Association for Computational Linguistics. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT), pages 742–750. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2010. A transition-based parser for 2-planar dependency structures. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1492–1501. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2013. Divisible transition systems and multiplanar dependency parsing. Computational Linguistics, 39(4):799–845. Johan Hall, Jens Nilsson, Joakim Nivre, G¨ulsen Eryi˘git, Be´ata Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single malt or blended? 
A study in multilingual parser optimization. In Proceedings of the CoNLL Shared Task of EMNLPCoNLL 2007, pages 933–939. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2012. Incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1045–1053. Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan and Claypool. Taku Kudo and Yuji Matsumoto. 2000. Japanese dependency structure analysis based on support vector machines. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora, pages 18–25. Marco Kuhlmann, Carlos G´omez-Rodr´ıguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 673–682. Ji Ma, Jingbo Zhu, Tong Xiao, and Nan Yang. 2013. Easy-first pos tagging and dependency parsing with beam search. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 110–114, Sofia, Bulgaria, August. Association for Computational Linguistics. Joakim Nivre, Johan Hall, and Jens Nilsson. 2006a. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 2216–2219. Joakim Nivre, Johan Hall, Jens Nilsson, G¨ulsen Eryi˘git, and Svetoslav Marinov. 2006b. Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL), pages 221–225. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149–160. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34:513–553. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACLIJCNLP), pages 351–359. Emily Pitler and Ryan McDonald. 2015. A lineartime transition system for crossing interval trees. In Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT), pages 662–671. 159 Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34:151–175. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING), pages 753–760. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 173–180, Stroudsburg, PA, USA. Association for Computational Linguistics. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of ACL 2015, pages 323–333. Hiroyasu Yamada and Yuji Matsumoto. 2003. 
Statistical dependency analysis with support vector machines. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 195–206. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 562–571. Hao Zhang and Ryan McDonald. 2014. Enforcing structural diversity in cube-pruned dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 656–661, Baltimore, Maryland, June. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL). 160
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1589–1599, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM Ivan Habernal† and Iryna Gurevych†‡ †Ubiquitous Knowledge Processing Lab (UKP) Department of Computer Science, Technische Universit¨at Darmstadt ‡Ubiquitous Knowledge Processing Lab (UKP-DIPF) German Institute for Educational Research www.ukp.tu-darmstadt.de Abstract We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large datasets of 16k pairs of arguments over 32 topics and investigate whether the relation “A is more convincing than B” exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman’s correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses. 1 Introduction What makes a good argument? Despite the recent achievements in computational argumentation, such as identifying argument components (Habernal and Gurevych, 2015; Habernal and Gurevych, 2016), finding evidence for claims (Rinott et al., 2015), or predicting argument structure (Peldszus and Stede, 2015; Stab and Gurevych, 2014), this question remains too hard to be answered. Even Aristotle claimed that perceiving an argument as a “good” one depends on multiple factors (Aristotle and Kennedy (translator), 1991) — not only the logical structure of the argument (logos), but also on the speaker (ethos), emotions (pathos), or context (cairos) (Schiappa and Nordin, 2013). Experiments also show that different audiences perceive the very same arguments differently (Mercier and Sperber, 2011). A solid body of argumentation research has been devoted to the quality of arguments (Walton, 1989; Johnson and Blair, 2006), giving more profound criteria that “good” arguments should fulfill. However, the empirical evidence proving applicability of many theories falls short on everyday arguments (Boudry et al., 2015). Since the main goal of argumentation is persuasion (Nettel and Roque, 2011; Mercier and Sperber, 2011; Blair, 2011; OKeefe, 2011) we take a pragmatic perspective on qualitative properties of argumentation and investigate a new high-level task. We asked whether we could quantify and predict how convincing an argument is. Prompt: Should physical education be mandatory in schools? Stance: Yes! Argument 1 Argument 2 physical education should be mandatory cuhz 112,000 people have died in the year 2011 so far and it’s because of the lack of physical activity and people are becoming obese!!!! YES, because some children don’t understand anything excect physical education especially rich children of rich parents. Figure 1: Example of an argument pair. 
If we take Argument 1 from Figure 1, assigning a single “convincingness score” is highly subjective, given the lack of context, reader’s prejudice, beliefs, etc. However, when comparing both arguments from the same example, one can decide that A1 is probably more convincing than A2, because it uses at least some statistics, addresses the health 1589 factor, and A2 is just harsh and attacks.1 We adapt pairwise comparison as our backbone approach. We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rockt¨aschel et al., 2016). Main contributions of this article are (1) large annotated dataset consisting of 16k argument pairs with 56k reasons in natural language (700k tokens), (2) thorough investigation of the annotated data with respect to properties of convincingness as a measure, (3) a SVM model and end-to-end BLSTM model. The annotated data, licensed under CC-BY-SA license, and the experimental code are publicly available at https://github.com/UKPLab/ acl2016-convincing-arguments. We hope it will foster future research in computational argumentation and beyond. 2 Related Work Recent years can be seen as a dawn of computational argumentation – an emerging sub-field of NLP in which natural language arguments and argumentation are modeled, searched, analyzed, generated, and evaluated. The main focus has been paid to analyzing argument structures, under the umbrella entitled argumentation mining. Web discourse as a data source has been exploited in several tasks in argumentation mining, such as classifying propositions in user comments into three classes (verifiable experiential, verifiable non-experiential, and unverifiable) (Park and Cardie, 2014), or mapping argument components to Toulmin’s model of argument in user-generated Web discourse (Habernal and Gurevych, 2015), to name a few. While these approaches are crucial for understanding the structure of an argument, they do not directly address any qualitative criteria of argumentation. Argumentation quality has been an active topic 1These are actual reasons provided by annotators, as will be explained later in Section 3. among argumentation scholars. Walton (1989) discusses validity of arguments in informal logic, while Johnson and Blair (2006) elaborate on criteria for practical argument evaluation (namely Relevance, Acceptability, and Sufficiency). Yet, empirical research on argumentation quality does not seem to reflect these criteria and leans toward simplistic evaluation using argument structures, such as how many premises support a claim (Stegmann et al., 2011), or by the complexity of the analyzed argument scheme (Garcia-Mila et al., 2013). To the best of our knowledge, there have been only few attempts in computational argumentation that go deeper than analyzing argument structures (e.g., (Park and Cardie, 2014) mentioned above). Persing and Ng (2015) model argument strength in persuasive essays using a manually annotated corpus of 1,000 documents labeled with a 1–4 score value. 
Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference. Bowman et al. (2015) introduced a 570k sentence pairs written by crowd-workers, the largest corpus to date. Whereas their task is to classify whether the sentence pair represents entailment, contradiction, or is neutral (thus heading towards a deep semantic understanding), our goal is to assess the pragmatical properties of the given multiple sentence-long arguments (to which extent they fulfill the goal of persuasion). Moreover, each of our annotated argument pairs is accompanied with five textual reasons that explain the rationale behind the labeler’s decision. This is, to the best of our knowledge, a unique novel feature of our data. Pairwise assessment for obtaining relative preference was examined by (Chen et al., 2013), among many others.2 Their system was tested on ranking documents by their reading difficulty. Relative preference annotations have also been heavily employed in assessing machine translation (Aranberri et al., 2016). By contrast to our work, the underlying relations (reading difficulty 1-12 or better translation) have well known properties of total ordering, while convincingness of arguments is a yet unexplored task, thus no assumptions can be made apriori. There is also a substantial body of work on learning to rank, where also a pairwise approach is widely used (Cao et al., 2007). These methods have been traditionally used in IR, 2See (Shah et al., 2015) for a recent overview. 1590 where the retrieved documents are ranked according to their relevance and pairs of documents are automatically sampled. Employing LSTM for natural language inference tasks has recently gained popularity (Rockt¨aschel et al., 2016; Wang and Jiang, 2016; Cheng et al., 2016). These methods are usually tested on the SNLI data introduced above (Bowman et al., 2015). 3 Data annotation Since assessing convincingness of a single argument directly is a very subjective task with high probability of introducing annotator’s bias (because of personal preferences, beliefs, or background), we cast the problem as a relation annotation task. Given two arguments, one should be selected as more convincing, or they might be both equally convincing (see an example in Figure 1). 3.1 Sampling annotation candidates Sampling large sets of arguments for annotation from the Web poses several challenges. First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance). Finally, we need sources with permissive licenses, which allow us to release the resulting corpus further to the community. These criteria are met by arguments from two debate portals.3 We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, ”Should physical education be mandatory in schools? – yes” is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (A1 and A2) belonging to the same topic; see Figure 1. We automatically selected debates that contained at least 25 top-level4 arguments that were 10-110 words long (the mean for all top-level arguments was 66 ± 130 and the median 36, so we excluded the lengthy outliers in our sampling). We manually filtered out obvious silly debates (e.g., ’Superman vs. 
Batman’) and ended up with 32 topics (the full topic list is presented together with 3Namely, createdebate.com and procon.org. 4Such arguments directly address the topic and are not a part of a threaded discussion. experimental results later in Table 3). From each topic we automatically sampled 25-35 random arguments and created (n ∗(n −1)/2) argument pairs by combining all selected arguments. Sampling argument pairs only from the same topics and not combining opposite stances was a design decision how to mitigate annotators’ bias.5 The order of arguments A1 and A2 in each argument pair was randomly shuffled. In total we sampled 16,927 argument pairs. 3.2 Crowdsourcing annotations Let us extend our terminology. Worker is a single annotator in Amazon Mechanical Turk. Reason is an explanation why A1 is more convincing than A2 (or the other way round, or why they are equally convincing). Gold reason is a reason whose label matches the gold label in the argument pair (see Figure 2). In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either “A1 is more convincing than A2” (A1>A2), “A1 is less convincing than A2” (A1<A2), or “A1 and A2 are convincing equally” (A1=A2). Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2. The workers were also provided with clear and crisp instructions (e.g., do not judge the truth of the proposition; be objective; do not express your position; etc.). All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics. While 66% of workers had no standpoint, 14% had the opposite view and 20% the same view. This indicates that there should be no systematic bias in the data. Crowdsourcing took about six weeks in 2016 plus two weeks of pilot studies. In total, about 3,900 workers participated. Total costs including pilot studies and bonus payments were 5,520 USD. 3.3 Quality control and agreement We performed several steps in controlling the quality of the crowdsourced data. First, we allowed only workers from the U.S. with ≥96% acceptance rate to work on the task. Second, we employed MACE (Hovy et al., 2013) for estimating 5As some topics touch the very fundamental human beliefs and values, such as faith, trust, or sexuality, it is hard to put them consciously aside when assessing convincingness. 1591 Argument 1 Argument 2 physical education should be mandatory cuhz 112,000 people have [...] YES, because some children don’t understand anything expect [...] • A1>A2, because A1 uses statistics, and doesn’t make assumptions. • A1>A2, because A1 talks about the importance of health. • A1>A2, because A1 provides a health-related argument. • A1>A2, because A2 is very harsh and attacks • A1=A2, because Neither A1 or A2 cite evidence to support their claims. Figure 2: Example of an argument pair annotated by five workers. The arguments are shortened versions of Figure 1. The explanations (called reasons) after ‘because’ are written by workers; the estimated gold label for this pair is probably A1>A2, thus there are four gold reasons. the true labels and ranking the annotators. We set the MACE’s parameter threshold to 0.95 to keep only instances whose entropy is among the 95% best estimates. Third, we manually checked all the reasons for each worker. 
With paying more attention to workers with low MACE scores, we rejected all assignments of workers if they (1) copied&pasted the same or very similar reasons across argument pairs, (2) were only copying or rephrasing the texts from the arguments, (3) provided their opinion or were arguing, (4) had many typos or provided obvious nonsense. In total, we rejected 1161 assignments. We do not report any ‘standard’ inter-annotator agreement measures such as Fleiss’ κ or Krippendorff’s α, as their suitability for crowdsourcing has been recently disputed (Passonneau and Carpenter, 2014). However, in order to estimate the human performance, we analyzed the output of the pilot study. For each argument pair, we took the best-ranked worker for that particular pair (worker ranks are globally estimated by MACE) and computed her accuracy against the estimated gold labels.6 The best-ranked worker for each argument pair is not necessarily the globally best-ranked worker; in the pilot study, the average global rank of this hypothetical worker was 11 ± 6.6. This rank can be interpreted as a decently performing worker; the obtained score reached 0.935 accuracy. 6A similar approach was recently reported by Nakov et al. (2016). 3.4 Examining properties of convincingness 3.4.1 What makes a convincing argument? We manually examined a small sample of 200 gold reasons to find out what makes one argument more convincing than the other. A very common type of answer mentioned giving examples or actual reasons (“A1 cited several reasons to back up their argument.”) and facts (“A1 cites an outside source which can be more credible than opinion”). This is not surprising, as argumentation is often perceived as reason giving (Freeley and Steinberg, 2008). Others point out strengths in explaining the reasoning or logical coherence (“A1 gives a succinct and logical answer to the argument. A2 strays away from the argument in the response.”). The confirmation bias (Mercier and Sperber, 2011) also played a role (“A1 argues both viewpoints, A2 chooses a side.”). Given the noisiness of Web data, some of the arguments might be non-sense, which was also pointed out as a reason (“A1 attempts to argue that since porn exists, we should watch it. A2 doesn’t make sense or answer the question.”). Apart from the logical structure of the argument, emotional aspects and rhetorical moves were also spotted (“A1 contributes a viewpoint based on morality, which is a stronger argument than A2, which does not argue for anything at all.”, or “A1 calls for the killing of all politicians, which is an immature knee-jerk reaction to a topic. A2’s argument is more intellectually presented.”). 3.4.2 Transitivity evaluation using argument graphs The previous section shows a variety of reasons that makes one argument more convincing than other arguments. Considering A1 is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convingcingness of two arguments transitive, antisymmetric, and total? In particular, does is exhibit properties such that if A≥B and B≥C, then A≥C (total ordering)? We can treat arguments as nodes in a graph and argument pairs as graph edges. 
We will denote such graph as argument graph (and use nodes/arguments and edges/pairs interchangeably in this section).7 As the sampled argument pairs 7Argument pair A>B becomes a directed edge A →B 1592 contained all argument pair combinations for each topic, we ended up with an almost fully connected argument graph for each topic (remember that we discarded 5% of argument pair annotations with lowest reliability). We further investigate the properties of the argument graphs. Transitivity is only guaranteed, if the argument graph is a DAG (directed acyclic graph). Building argument graph from crowdsourced argument pairs We build the argument graph iteratively by sampling annotated argument pairs and adding them as graph edges (see Algorithm 1). We consider two possible scenarios in the graph building algorithm. In the first scenario, we accept only argument pairs without equivalency (thus A>B is allowed but A=B is forbidden and discarded). The second scenario accepts all pairs, but since the resulting graph must be DAG, equivalent arguments are merged into one node. We use Johnson’s algorithm for finding all elementary cycles in DAG (Johnson, 1975). Argument pair weights By building argument graph from all pairs, introducing cycles into the graph seems to be inevitable, given a certain amount of noise in the annotations. We asked the following question: to which extent does occurrence of cycles in an argument graph depend on the quality of annotations? We thus compute a weight for each argument pair. Let ei be a particular annotation pair (edge). Let Gi be all labels in that pair that match the predicted gold label, and Oi opposite labels (different from the gold label). Let v be a single worker’s vote and cv a global worker’s competence score. Then the weight w of edge ei is computed as follows: wei = σ  X v∈Gi cv −λ X v∈Oi cv   (1) where σ is a sigmoid function σ = 1 1+e−x to squeeze the weight into the (0, 1) interval and λ is a penalty for opposite labels (we set empirically λ to 10.0 to ensure strict penalization). For example, if the predicted gold label from Figure 2 were A1>A2, then Gi would contain four votes and Oi one vote (the last one). This weight allows us to sort argument pairs before sampling them for building the argument in the argument graph. graph. We test three following strategies. As a baseline, we use random shuffling (Rand), where no prior information about the weight of the pairs is given. The other two sorting algorithms use the argument pair weight computed by Equation 1. As the worst case scenario, we sort the pairs in ascending order (Asc), which means that the “worse” pairs come first to the graph building algorithm. We used this scenario to see how much the prior pair weight information actually matters, because building a graph preferably from bad pair label estimates should cause more harm. Finally, the Desc algorithm sorts the pairs given their weight in descending order (the “better” estimates come first). Algorithm 1: Building DAG from sorted argument pairs. 
input : argumentPairs; sortingAlg output: DAG SortPairs(argumentPairs, sortingAlg); finalPairs ←[]; foreach pair in argumentPairs do currentPairs ←[finalPairs, pair ]; /* cluster edges labeled as equal so they will be treated as a single node */ clusters ←clusterEqNodes(currentPairs); /* wire the pairs into directed graph */ g ←buildGraph(currentPairs, clusters); if hasCycles(g) then // report about breaking DAG else finalPairs += pair; return buildGraph(finalPairs); Measuring transitivity score We measure how “good” the graph is by a transitivity score. Here we assume that the graph is a DAG. Given two nodes A and Z, let PL be the longest path between these nodes and PS the shortest path, respectively. For example, let PL = A →B →C →Z and PS = A →D →Z. Then the transitivity score is the ratio of longest and shortest path |PL| |PS|. (which is 1.5 is our example). The average transitivity score is then an average of transitivity scores for each pair of nodes from the graph that are connected by two or more paths. Analogically, the maximum transitivity score is the maximal value. We restrict the shortest path to be a direct edge only. The motivation for the transitivity score is the following. If the longest path between A and Z (A →. . . →Z) consists of 10 other nodes, than the total ordering property requires that there also exists a direct edge A →Z. This is indeed em1593 Figure 3: Final argument graph example (topic: “Christianity or Atheism? Atheism”). Node labels: I/O/Tr (I: incoming edges, O: outgoing edges, Tr: maximum transitivity score). Lighter nodes have prevailing O; larger nodes have higher absolute number of O. pirically confirmed by the presence of the shortest path between A and Z. Thus the longer the longest path and the shorter the shortest path are on average, the bigger empirical evidence is given about the transitivity property. Figure 3 shows an example of argument graph built using only non-equivalent pairs and desc prior sort or argument pairs. There are few “bad” arguments in the middle (many incoming edges, none outcoming) and few very convincing arguments (large circles). Notice the high maximum transitivity score even for medium-sized nodes. Observations First, let us compare the different sorting algorithms for each sampling strategy. As Table 1 shows, on average, 158 pairs are ignored in total when all pairs are used for sampling (26 removed by MACE and 132 by the graph building algorithm), while 164 pairs are ignored when only non-equivalent pairs are sampled (129 had already been removed apriori—26 by MACE and 103 as equivalent pairs—and 35 by the graph algorithm). The results show a tendency that, when sampling annotated argument pairs for building a DAG, sorting argument pairs by their weight based on workers’ scores influences the number of pairs that break the DAG by introducing cycles. In particular, starting with more confident argumentation pairs, the graph grows bigger while keeping its DAG consistency. The presence of the equal relation causes cycles to break the DAG sooner as compared to argument pairs in which one argument is more convincing than the other. We interpret this finding as that it is easier for humans to judge A>B than A=B consistently across all possible pairs of arguments from a given topic. 3.4.3 Gold-standard corpora Our experiments show that convincingness between a pair of arguments exhibits properties of strict total order when the possibility of two equally convincing arguments is prohibited. 
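To make the procedure concrete, the edge weight of Equation 1 and the cycle-skipping step of Algorithm 1 can be sketched as follows. This is a simplified sketch of ours for the no-equivalency scenario, assuming the descending-weight (Desc) ordering; networkx is used only for the acyclicity check.

```python
import math
import networkx as nx

LAMBDA = 10.0  # penalty for votes contradicting the estimated gold label (Eq. 1)

def edge_weight(gold_competences, opposite_competences, lam=LAMBDA):
    """Equation 1: sigmoid of (summed competences of gold-matching votes
    minus lambda times the summed competences of opposing votes)."""
    x = sum(gold_competences) - lam * sum(opposite_competences)
    return 1.0 / (1.0 + math.exp(-x))

def build_dag(argument_pairs):
    """Greedy variant of Algorithm 1 without equivalency pairs: each pair is
    a tuple (more_convincing, less_convincing, weight); a pair is skipped if
    adding its directed edge would introduce a cycle."""
    graph = nx.DiGraph()
    for winner, loser, _w in sorted(argument_pairs, key=lambda p: -p[2]):
        graph.add_edge(winner, loser)
        if not nx.is_directed_acyclic_graph(graph):
            graph.remove_edge(winner, loser)  # this pair would break the DAG
    return graph
```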
We thus used the above-mentioned method for graph building as a tool for posterior gold data filtering. We discard the equal argument pairs in advance and filter out argument pairs that break the DAG properties. As a result, a set of 11,650 argument pairs labeled as either A>B or A<B remains, which is summarized in Table 2. We call this corpus UKPConvArgStrict. However, since the total strict ordering property of convincingness is only an empirically confirmed working hypothesis, we also propose another realistic application. We construct a mixed graph by treating equal argument pairs (A=B) as undirected edges. Using PageRank, we rank the arguments (nodes) globally. The higher the PageRank for a particular node is, the “less convincing“ the argument is (has a global higher probability of incoming edges). This allows us to rank all arguments for a particular topic. We call this dataset UKPConvArgRank (see Table 2). We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering. Along the actual argument texts, all the gold-standard corpora contain the reasons as well as full workers’ information and debate meta-data. 4 Experiments We experiment with two machine learning algorithms on two tasks using the two new benchmark corpora (UKPConvArgStrict and UKPConvArgRank). In both tasks, we perform 32-fold cross-topic cross-validation (one topic is test data, remaining 31 topics are training ones). This rather 1594 All pairs No equivalency pairs Rand. Asc. Desc. Rand. Asc. Desc. Fixed values All annotated pairs 529 529 529 529 529 529 Pairs removed apriori 26 26 26 129 129 129 Before first cycle detected Edges in graph 32 16 86 84 37 199 Nodes in graph 20 16 25 32 28 33 Pairs sampled 44 26 132 84 37 199 First cycle length 1,8 2,0 1,5 4,2 3,9 3,4 Final statistics after all pairs sampled avg. Transitivity score 6,4 5,9 6,8 11,1 11,3 10,8 max. Transitivity score 14,5 13,2 15,5 24,4 25,6 24,0 Edges in graph 105 81 114 357 339 365 Nodes in graph 16 14 16 33 33 33 Pairs sampled 339 294 369 356 339 364 Ignored pairs 162 208 132 42 60 35 Table 1: Values averaged over all 32 topics reported by different sampling strategies and scenarios for argument graph building. Dataset Size Instance type Size per topic Gold label distribution Gold reasons a1 a2 eq size tokens UKPConvArgAll 16,081 argument pair 502.5 ± 91.3 6,398 6,394 3,289 56,446 696,537 UKPConvArgStrict 11,650 argument pair 364.1 ± 71.1 5,872 5,778 0 44,121 547,057 UKPConvArgRank 1,052 argument 32.9 ± 3.2 — — Table 2: Properties of resulting gold data. challenging setting ensures that no arguments are seen in both training and test data. 4.1 Predicting convincingness of pairs Since this task is a binary classification and the classes are equally distributed (see Table 2), we report accuracy and average the final score over folds (Forman and Scholz, 2010). Methods As a “traditional” method, we employ SVM with RBF kernel8 based on a large set of rich linguistic features. They include uni- and bi-gram presence, ratio of adjective and adverb endings that may signalize neuroticism (Corney et al., 2002), contextuality measure (Heylighen and Dewaele, 2002), dependency tree depth, ratio of exclamation or quotation marks, ratio of modal verbs, counts of several named entity types, ratio of past vs. 
future tense verbs, POS n-grams, presence of dependency tree production rules, seven different readability measures (e.g., Ari (Senter and Smith, 1967), Coleman-Liau (Coleman and Liau, 1975), Flesch (Flesch, 1948), and others), five sentiment scores (from very negative to very positive) (Socher et al., 2013), spellchecking using standard Unix words, ratio of superlatives, and some surface features such as sentence lengths, longer words count, etc. The resulting feature vector dimension is about 64k. We also use bidirectional Long Short-Term 8Using LISBVM (Chang and Lin, 2011). Memory (BLSTM) neural network for end-to-end processing.9 The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl;10 the embedding weights are further updated during training. The core of the model consists of two bi-directional LSTM networks with 64 output neurons each. Their output is then concatenated into a single drop-out layer and passed to the final sigmoid layer for binary predictions. We train the network with ADAM optimizer (Kingma and Ba, 2015) using binary crossentropy loss function and regularize by early stopping (5 training epochs) and high drop-out rate (0.5) in the dropout layer. For both models, each training/test instance simply concatenates A1 and A2 from the argument pair. Results and error analysis As shown in Table 3, SVM (0.78) outperforms BSLTM (0.76) with a subtle but significant difference. It is also apparent that some topics are more challenging regardless of the system (e.g., “Is it better to have a lousy father or to be fatherless? – Lousy father”). Both systems outperform a simple baseline (lemma ngram presence features with SVM, not reported in detail, achieved 0.65 accuracy) but still do not reach the human upper bounds (0.93 as reported in Section 3.3). 9Using http://keras.io/ 10http://nlp.stanford.edu/projects/ glove/ 1595 Topic SVM BLSTM Ban Plastic Water Bottles? No .85 .76 Yes .90 .83 Christianity or Atheism Atheism .81 .80 Christianity .68 .75 Evolution vs. Creation Creation .84 .88 Evolution .66 .77 Firefox vs. Internet Explorer IE .84 .81 Firefox .82 .78 Gay marriage - right or wrong? Right .76 .74 Wrong .82 .87 Should parents use spanking? No .84 .78 Yes .79 .68 If your spouse committed murder [...] No .71 .64 Yes .79 .72 India has the potential to lead the world No .82 .77 Yes .69 .79 Is it better to have a lousy father or to be fatherless? Fatherless .77 .69 Lousy father .67 .60 Is porn wrong? No .82 .79 Yes .85 .85 Is the school uniform a good or bad idea? Bad .75 .78 Good .83 .74 Pro choice vs. Pro life Choice .71 .68 Life .79 .80 Should physical edu. be mandatory? No .79 .80 Yes .79 .78 TV is better than books No .78 .73 Yes .78 .75 Personal pursuit or common good? Common .72 .78 Personal .67 .68 Farquhar as the founder of Singapore No .79 .63 Yes .85 .76 Average .78 .76 Table 3: Accuracy results on UKPConvArgStrict data. The difference between SVM and bidirectional LSTM is significant, p = 0.0414 using two-tailed Wilcoxon signed-rank test. We examined about fifty random false predictions to gain some insight into the limitations of both systems. We looked into argument pairs, in which both methods failed, as well as into instances where only one model was correct. BLSTM won in few cases by properly catching jokes or off-topic arguments; SVM was properly catching all-upper-case arguments (considered as less convincing). 
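For reference, the BLSTM described in Section 4.1 can be reconstructed roughly as follows. The text does not fully specify how the two 64-unit BLSTMs are wired, so this sketch of ours assumes both read the concatenated A1+A2 token sequence; it uses the present-day Keras API rather than the 2016 version, and all names are ours.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_blstm(vocab_size, glove_matrix=None, embedding_dim=300, max_len=300):
    """Pair classifier: concatenated A1+A2 token ids -> P(A1 more convincing)."""
    emb_init = ("uniform" if glove_matrix is None
                else keras.initializers.Constant(glove_matrix))
    tokens = layers.Input(shape=(max_len,), dtype="int32")
    emb = layers.Embedding(vocab_size, embedding_dim,
                           embeddings_initializer=emb_init,
                           trainable=True)(tokens)  # GloVe weights keep training
    blstm_a = layers.Bidirectional(layers.LSTM(64))(emb)
    blstm_b = layers.Bidirectional(layers.LSTM(64))(emb)
    merged = layers.Dropout(0.5)(layers.concatenate([blstm_a, blstm_b]))
    # For the ranking task (Section 4.2), this layer becomes Dense(1, "linear")
    # and the loss mean absolute error.
    output = layers.Dense(1, activation="sigmoid")(merged)
    model = keras.Model(tokens, output)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Training in the paper is stopped early after 5 epochs, e.g. model.fit(X, y, epochs=5).
```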
By examining failures common to both systems, we found several cases where the prediction was wrong due to very negative sentiment (which might be a sign of the less convincing argument), but in other cases an argument with strong negative sentiment was actually the more convincing one. In general, we did not find any tendency on failures; they were also independent of the worker assignments distribution, thus not caused by likely ambiguous (hard) instances. SVM BLSTM p-value Pearson’s r .351 .270 ≪0.01 Spearman’s ρ .402 .354 ≪0.01 Table 4: Correlation results on UKPConvArgRank. 4.2 Ranking arguments We address this problem as a regression task. We use the UKPConvArgRank data, in which a realvalue score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently). The task is thus to predict a real-value score for each argument from the test topic (remember that we use 32-fold cross validation). We measure Spearman’s and Pearson’s correlation coefficients on all results combined (not on each fold separately). Without any modifications, we use the same SVM and features as described in Section 4.1. Regarding the BLSTM, we only replace the output layer with a linear activation function and optimize mean absolute error loss. Table 4 shows that SVM outperforms BLSTM. All correlations are highly statistically significant. 4.3 Results discussion Although the “traditional” SVM with rich linguistic features outperforms BLSTM in both tasks, there are other aspects to be considered. First, the employed features require heavy languagespecific preprocessing machinery (lemmatizer, POS tagger, parser, NER, sentiment analyzer). By contrast, BLSTM only requires pre-trained embedding vectors, while delivering comparable results. Second, we only experimented with vanilla LSTMs. Recent developments of deep neural networks (especially attention mechanisms or gridLSTMs) open up many future possibilities to gain performance in this end-to-end task. 5 Conclusion and future work We propose a novel task of predicting Web argument convincingness. We crowdsourced a large corpus of 16k argument pairs over 32 topics and used global constraints based on transitivity properties of convincingness relation for cleaning the data. We experimented with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman’s correlation in a crosstopic scenario. We release the newly created corpus UKPConvArg1 and the experimental software 1596 under free licenses.11 To the best of our knowledge, we are the first who deal with argument convincingness in Web data on such a large scale. In the current article, we have only slightly touched the annotated natural text reasons. We believe that the presence of 44k reasons (550k tokens) is another important asset of the newly created corpus, which deserves future investigation. Acknowledgements This work has been supported by the Volkswagen Foundation as part of the LichtenbergProfessorship Program under grant No I/82806, by the German Institute for Educational Research (DIPF), by the German Research Foundation (DFG) via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1), by the GRK 1994 AIPHES (DFG), and by Amazon Web Services in Education Grant award. Lastly, we would like to thank the anonymous reviewers for their valuable feedback. References Nora Aranberri, Gorka Labaka, Arantza D´ıaz de Ilarraza, and Kepa Sarasola. 2016. Ebaluatoia: crowd evaluation for English-Basque machine translation. 
Language Resources and Evaluation. In press. Aristotle and George Kennedy (translator). 1991. On Rhetoric: A Theory of Civil Discourse. Oxford University Press, USA. J. Anthony Blair. 2011. Argumentation as rational persuasion. Argumentation, 26(1):71–81. Maarten Boudry, Fabio Paglieri, and Massimo Pigliucci. 2015. The Fake, the Flimsy, and the Fallacious: Demarcating Arguments in Real Life. Argumentation, 29(4):431–456. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: From pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, ICML ’07, pages 129–136, New York, NY, USA. ACM. 11https://github.com/UKPLab/ acl2016-convincing-arguments Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Xi Chen, Paul N. Bennett, Kevyn Collins-Thompson, and Eric Horvitz. 2013. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the sixth ACM international conference on Web search and data mining - WSDM ’13, page 193, New York, New York, USA. ACM Press. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long Short-Term Memory-Networks for Machine Reading. arXiv. Meri Coleman and T. L. Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60:283–284. Malcolm Corney, Olivier de Vel, Alison Anderson, and George Mohay. 2002. Gender-preferential text mining of e-mail discourse. In Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC02), pages 282–289. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China, July. Association for Computational Linguistics. Rudolf Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32:221–233. George Forman and Martin Scholz. 2010. Apples-toApples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement. ACM SIGKDD Explorations Newsletter, 12(1):49–57. Austin J. Freeley and David L. Steinberg. 2008. Argumentation and Debate. Cengage Learning, Stamford, CT, USA, 12th edition. Merce Garcia-Mila, Sandra Gilabert, Sibel Erduran, and Mark Felton. 2013. The effect of argumentative task goal on the quality of argumentative discourse. Science Education, 97(4):497–523. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. Book in preparation for MIT Press. Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2127–2137, Lisbon, Portugal, September. Association for Computational Linguistics. 1597 Ivan Habernal and Iryna Gurevych. 2016. Argumentation Mining in User-Generated Web Discourse. Computational Linguistics. Under review. 
http://arxiv.org/abs/1601.02403. Francis Heylighen and Jean-Marc Dewaele. 2002. Variation in the contextuality of language: An empirical measure. Foundations of Science, 7(3):293– 340. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning Whom to Trust with MACE. In Proceedings of NAACL-HLT 2013, pages 1120–1130, Atlanta, Georgia. Association for Computational Linguistics. Ralph H. Johnson and Anthony J. Blair. 2006. Logical Self-Defense. International Debate Education Association. Donald B. Johnson. 1975. Finding all the elementary circuits of a directed graph. SIAM Journal on Computing, 4(1):77–84. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA. Hugo Mercier and Dan Sperber. 2011. Why do humans reason? Arguments for an argumentative theory. The Behavioral and Brain Sciences, 34(2):57– 74; discussion 74–111. Preslav Nakov, Sara Rosenthal, Svetlana Kiritchenko, Saif M. Mohammad, Zornitsa Kozareva, Alan Ritter, Veselin Stoyanov, and Xiaodan Zhu. 2016. Developing a successful SemEval task in sentiment analysis of Twitter and other social media texts. Language Resources and Evaluation, 50(1):35–65. Ana Laura Nettel and Georges Roque. 2011. Persuasive argumentation versus manipulation. Argumentation, 26(1):55–69. Daniel J. OKeefe. 2011. Conviction, persuasion, and argumentation: Untangling the ends and means of influence. Argumentation, 26(1):19–32. Joonsuk Park and Claire Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29–38, Baltimore, Maryland, June. Association for Computational Linguistics. Rebecca J. Passonneau and Bob Carpenter. 2014. The Benefits of a Model of Annotation. Transactions of the Association for Computational Linguistics, 2:311–326. Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 938–948, Lisbon, Portugal, September. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543–552, Beijing, China, July. Association for Computational Linguistics. Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, and Jascha Sohl-Dickstein. 2015. Deep Knowledge Tracing. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28, pages 505—-513, Montreal, CA. Curran Associates, Inc. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 440–450, Lisbon, Portugal, September. Association for Computational Linguistics. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of the 2016 International Conference on Learning Representations (ICLR), pages 1–9. Edward Schiappa and John P. Nordin. 2013. Argumentation: Keeping Faith with Reason. Pearson UK, 1st edition. J. R. Senter and E. A. Smith. 1967. Automated readability index. Technical report AMRL-TR-66-220, Aerospace Medical Research Laboratories, Ohio. Nihar B. Shah, Sivaraman Balakrishnan, Joseph K. Bradley, Abhay Parekh, Kannan Ramchandran, and Martin J. Wainwright. 2015. Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence. arXiv preprint, abs/1505.01462. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA, October. Association for Computational Linguistics. 1598 Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46–56, Doha, Qatar, October. Association for Computational Linguistics. Karsten Stegmann, Christof Wecker, Armin Weinberger, and Frank Fischer. 2011. Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2):297–323. Douglas N. Walton. 1989. Informal Logic: A Handbook for Critical Argument. Cambridge University Press. Shuohang Wang and Jing Jiang. 2016. Learning Natural Language Inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, page (to appear), San Diego, CA, June. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain Neural Network Language Generation for Spoken Dialogue Systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, page (to appear). 1599
2016
150
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1600–1609, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Discovery of Treatments from Text Corpora Christian Fong Stanford University 655 Serra Street Stanford, CA 94305, USA [email protected] Justin Grimmer Stanford University 616 Serra Street Stanford, CA 94305, USA [email protected] Abstract An extensive literature in computational social science examines how features of messages, advertisements, and other corpora affect individuals’ decisions, but these analyses must specify the relevant features of the text before the experiment. Automated text analysis methods are able to discover features of text, but these methods cannot be used to obtain the estimates of causal effects—the quantity of interest for applied researchers. We introduce a new experimental design and statistical model to simultaneously discover treatments in a corpora and estimate causal effects for these discovered treatments. We prove the conditions to identify the treatment effects of texts and introduce the supervised Indian Buffet process to discover those treatments. Our method enables us to discover treatments in a training set using a collection of texts and individuals’ responses to those texts, and then estimate the effects of these interventions in a test set of new texts and survey respondents. We apply the model to an experiment about candidate biographies, recovering intuitive features of voters’ decisions and revealing a penalty for lawyers and a bonus for military service. 1 Introduction Computational social scientists are often interested in inferring how blocks of text, such as messages from political candidates or advertising content, affect individuals’ decisions (Ansolabehere and Iyengar, 1995; Mutz, 2011; Tomz and Weeks, 2013). To do so, they typically attempt to estimate the causal effect of the text: they model the outcome of interest, Y , as a function of the block of text presented to the respondent, t, and define the treatment effect of t relative to some other block of text t′ as Y (t) −Y (t′) (Rubin, 1974; Holland, 1986). For example, in industrial contexts researchers design A/B tests to compare two potential texts for a use case. Academic researchers often design one text that has a feature of interest and another text that lacks that feature but is otherwise identical (for example, (Albertson and Gadarian, 2015)). Both kinds of experiments assume researchers already know the features of text to vary and offer little help to researchers who would like to discover the features to vary. Topic models and related methods can discover important features in corpora of text data, but they are constructed in a way that makes it difficult to use the discovered features to estimate causal effects (Blei et al., 2003). Consider, for example, supervised latent Dirichlet allocation (sLDA) (Mcauliffe and Blei, 2007). It associates a topicprevalence vector, θ, with each document where the estimated topics depend upon both the content of documents and a label associated with each document. If K topics are included in the model, then θ is defined on the K −1-dimensional unit simplex. It is straightforward to define a treatment effect as the difference between two treatments θ and θ ′ (or points on the simplex) Y (θ) −Y (θ ′). It is less clear how to define the marginal effect of any one dimension. 
This is because bigger values on some dimensions implies smaller values on other dimensions, making the effect of any one topic necessarily a combination of the differences obtained when averaging across all the dimensions (Aitchison, 1986; Katz and King, 1999). This problem will befall all topic models because the zero-sum nature of the topic-prevalence vector implies that increasing the prevalence of any one topic necessarily decreases the prevalence of some 1600 other topic. The result is that it is difficult (or impossible) to interpret the effect of any one topic marginalizing over the other topics. Other applications of topic models to estimate causal effects treat text as the response, rather than the treatment (Roberts et al., 2016). And still other methods require a difficult to interpret assumption of how text might affect individuals’ responses (Beauchamp, 2011). To facilitate the discovery of treatments and to address the limitation of existing unsupervised learning methods, we introduce a new experimental design, framework, and statistical model for discovering treatments within blocks of text and then reliably inferring the effects of those treatments. By doing so, we combine the utility of discovering important features in a topic model with the scientific value of causal treatment effects estimated in a potential outcomes framework. We present a new statistical model—the supervised Indian Buffet Process—to both discover treatments in a training set and infer the effects treatments in a test set (Ghahramani and Griffiths, 2005). We prove that randomly assigning blocks of text to respondents in an experiment is sufficient to identify the effects of latent treatments that comprise blocks of text. Our framework provides the first of its kind approach to automatically discover treatment effects in text, building on literatures in both social science and machine learning (Blei et al., 2003; Beauchamp, 2011; Mcauliffe and Blei, 2007; Roberts et al., 2016). The use of the training and test set ensures that this discovery does not come at the expense of credibly inferring causal effects, insulating the research design from concerns about “p-hacking” and overfitting (Ioannidis, 2005; Humphreys et al., 2013; Franco et al., 2014). Critically, we use a theoretical justification for our methodology: we select our particular approach because it enables us to estimate causal effect of interest. Rather than demonstrating that our method performs better at some predictive task, we prove that our method is able to estimate useful causal effects from the data. We apply our framework to study how features of a political candidate’s background affect voters’ decisions. We use a collection of candidate biographies collected from Wikipedia to automatically discover treatments in the biographies and then infer their effects. This reveals a penalty for lawyers and career politicians and a bonus for military service and advanced degrees. While we describe our procedure throughout the paper, we summarize our experimental protocol and strategy for discovering treatment effects in Table 1. Table 1: Experimental Protocol for Discovering and Estimating Treatment Effects 1) Randomly assign texts, Xj, to respondents 2) Obtain response Yi for each respondent. 
3) Divide texts and responses into training and test set 4) In training set: a) Use supervised Indian Buffet Process (sIBP) applied to documents and responses to infer latent treatments in texts b) Model selection via quantitative fit and qualitative assessment 5) In test set: a) Use sIBP trained on training set to infer latent treatments on test set documents b) Estimate effect of treatments with regression, with a bootstrap procedure to estimate uncertainty 2 A Framework for Discovering Treatments from Text Our goal is to discover a set of features— treatments—underlying texts and then estimate the effect of those treatments on some response from an individual. We first show that randomly assigning texts to respondents is sufficient to identify treatment effects. We then provide a statistical model for using both the text and responses to discover latent features in the text that affect the response. Finally, we show that we can use the mapping from text to features discovered on a training set to estimate the presence of features in a test set, which allows us to estimate treatment effects in the test set. 1601 2.1 Randomizing Texts Identifies Underlying Treatment Effects When estimating treatment effects, researchers often worry that the respondents who received one treatment systematically differ from those who received some other treatment. In a study of advertising, if all of the people who saw one advertisement were men and all of the people who saw a different advertisement were women, it would be impossible to tell whether differences in their responses were driven by the fact that they saw different advertisements or by their pre-existing differences. Randomized experiments are the gold standard for overcoming this problem (Gerber and Green, 2012). However, in text experiments, individuals are randomly assigned to blocks of text rather than to the latent features of the text that we analyze as the treatments. In this section, we show that randomly assigning blocks of text is sufficient to identify treatment effects. To establish our result, we suppose we have a corpora of J texts, X. We represent a specific text with Xj ∈X, with Xj ∈ℜD. Throughout we will assume that we have standardized the variable Xj to be a per-document word usage rate with each column normalized to have mean zero and variance one. We have a sample of N respondents from a population, with the response of individual i to text j[i] given by the potential outcome Yi(Xj[i]). We use the notation j[i] because multiple individuals may be assigned to the same text; if i and i′ are assigned to the same text, then j[i] = j[i′]. We suppose that for each document j there is a corresponding vector of K binary treatments Zj ∈Z where Z contains all 2K possible combinations of treatments, {0, 1}K. The function g : X →Z maps from the texts to the set of binary treatments: we will learn this function using the supervised Indian Buffet process introduced in the next section. Note that distinct elements of X may map to the same element of Z. To establish our identification result, we assume (Assumption 1) Yi(X) = Yi(Xj[i]) for all i. This assumption ensures that each respondent’s treatment assignment depends only on her assigned text, a version of the Stable Unit Treatment Value Assumption (SUTVA) for our application (Rubin, 1986). We also assume (Assumption 2) that Yi(Xj[i]) = Yi(g(Xj[i])) for all Xj[i] ∈X and all i, or that Zj[i] is sufficent to describe the effect of a document on individual i’s response. 
Stated differently, we assume an individual would respond in the same way to two different texts if those texts have the same latent features. We further suppose (Assumption 3) that texts are randomly assigned to respondents according to probability measure h, ensuring that Yi(g(Xj[i])) ⊥⊥Xj[i] for all Xj[i] ∈X and for all individuals i. This assumption ensures unobserved characteristics of individuals are not confounding inferences about the effects of texts. The random assignment of texts to individuals induces a distribution over a probability measure on treatment vectors Z, f(Z) = R X 1(Z = g(X))h(X)dX. Finally, we assume (Assumption 4) that f(Z) > 0 for all Z ∈Z.1 This requires that every combination of treatment effects is possible from the documents in our corpus. In practice, when designing our study we want to ensure that the treatments are not aliased or perfectly correlated. If perfect correlation exists between factors, we are unable to disentangle the effect of individual factors. In this paper we focus on estimating the Average Marginal Component Specific Effect for factor k (AMCEk) (Hainmueller et al., 2014).2 The AMCEk is useful for finding the effect of one feature, k, when k interacts with the other features in some potentially complicated way. It is defined as the difference in outcomes when the feature is present and when it is not present, averaged over the values of all of the other features. Formally, AMCEk = R Z−k E [Y (Zk = 1, Z−k) −Y (Zk = 0, Z−k)] m(Z−k)dZ−k where m(Z−k) is some analyst-defined density on all elements but k of the treatment vector. For example, m(·) can be chosen as the density of Z−k in the population to obtain the marginal component effect of k in the empirical population. The most commonly used m(·) in applied work is uniform across all Z−k’s, and we follow this convention here. We now prove that assumptions 1, 2, 3, and 4 are sufficient to identify the AMCEk for all k. Proposition 1. Assumptions 1, 2, 3, and 4 are suf1Note for this assumption to hold it is necessary, but not sufficient that g is a surjection from X onto Z. 2The procedure here can be understood as a method for discovering the treatments that are imposed by assumption in conjoint analysis, as presented by (Hainmueller et al., 2014). We deploy the regression estimator used in conjoint analysis as a subroutine of our procedure (see Step 5b in Table 1), but otherwise our experimental design, statistical method, and proof is distinct. 1602 ficient to identify the AMCEk for arbitrary k. Proof. To obtain a useful form, we first marginalize over the documents to obtain, R Z−k R X E [Y (Zk = 1, Z−k)] f(Z−k|Zk = 1, X) −E [Y (Zk = 0, Z−k)] f(Z−k|Zk = 0, X)h(X)dXdZ−k = R Z−k E [Y (Zk = 1, Z−k)] f(Z−k|Zk = 1) −E [Y (Zk = 0, Z−k)] f(Z−k|Zk = 0)dZ−k . Where f(Z−k|Zk = 1) and f(Z−k|Zk = 0) are the induced distributions over latent features from averaging over documents. If f(Z−k|Zk = 0) = f(Z−k|Zk = 1) = m(Z−k) then this is the AMCEk. Otherwise consider m(Z) > 0 for all Z ∈Z. Because f(Z) > 0, f(Z−k|Zk = 0) > 0 and f(Z−k|Zk = 1) > 0. Thus, there exists conditional densities h(Z|Zk = 1) and h(Z|Zk = 0) such that f(Z−k|Zk=1) h(Z−k|Zk=1) = f(Z−k|Zk=0) h(Z−k|Zk=0) = m(Z−k) 2.2 A Statistical Model for Identifying Features The preceding section shows that if we are able to discover features in the data, we can estimate their AMCEs by randomly assigning texts to respondents. We now present a statistical model for discovering those features. 
As we argued in the introduction, it is difficult to use the topics obtained from topic models like sLDA because the topic vector exists on the simplex. When we compare the outcomes associated with two different topic vectors, we do not know whether the change in the response is caused by increasing the degree to which the document about one topic or decreasing the degree to which it is about another, because the former mathematically entails the latter. Other models, such as LASSO regression, would necessarily suppose that the presence and absence of words are the treatments (Hastie et al., 2001; Beauchamp, 2011). This is problematic substantively, because it is hard to know exactly what the presence or absence of a single word implies as a treatment in text. We therefore develop the supervised Indian Buffet Process (sIBP) to discover features in the document. For our purposes, the sIBP has two essential properties. First, it produces a binary topic vector, avoiding the complications of treatments assigned on the simplex. Second, unlike the Indian Buffet Process upon which it builds (Ghahramani and Griffiths, 2005), it incorporates information about the outcome associated with various texts, and therefore discovers features that explain both the text and the response.3 Figure 1 describes the posterior distribution for the sIBP and a summary of the posterior is given in Equation 1. We describe the model in three steps: the treatment assignment process, document creation, and response. The result is a model that creates a link between document content and response through a vector of treatment assignments. Treatment Assignment We assume that π is a K-vector (where we take the limit as K →∞) where πk describes the population proportion of documents that contain latent feature k. We suppose that π is generated by the stick-breaking construction (Doshi-Velez et al., 2009). Specifically, we suppose that ηk ∼Beta(α, 1) for all K. We label π1 = η1 and for each remaining topic, we assume that πk = Qk z=1 ηz. For document j and topic k, we suppose that zj,k ∼Bernoulli(πk), which importantly implies that the occurrence of treatments are not zero sum. We collect the treatment vector for document j into Zj and collect all the treatment vectors into Z an Ntexts × K binary matrix, where Ntexts refers to number of unique documents. Throughout we will assume that Ntexts = N or that the number of documents and responses are equal and index the documents with i. Document Creation We suppose that the documents are created as a combination of latent factors. For topic k we suppose that Ak is a D−dimensional vector that maps latent features onto observed text. We collect the vectors into A, a K × D matrix, and suppose that Xi ∼ MVN(ZiA, σ2 nID), where Xi,d is the standardized number of times word d appears in document i. While it is common to model texts as draws from multinomial distributions, the multi3We note that there is a different model also called the supervised Indian Buffet Process (Quadrianto et al., 2013). There are fundamental differences between the model presented here and the sIBP in (Quadrianto et al., 2013). Their outcome is a preference relation tuple, while ours is a realvalued scalar. Because of this difference, the two models are fundamentally different. This leads to a distinct data generating process, model inference procedures, and inferences of features on the test set. To leverage the analogy between LDA and sLDA vis a vis IBP and sIBP, we overload the term sIBP in our paper. 
We expect that in future applications of sIBP, it will be clear from the context which sIBP is being employed. 1603 Z X A π Y β τ Figure 1: Graphical Model for the Supervised Indian Buffet Process variate normal distribution is useful for our purposes for two reasons. First, we normalize our data by transforming each column X·,d to be mean 0 and variance 1, ensuring that the multivariate normal distribution captures the overall contours of the data. Note that this implies that Xi,d can be negative. Second, we show that assuming a multivariate normal for document generation results in parameters that capture the distinctive rate words are used for each latent feature (Doshi-Velez et al., 2009). Response to Treatment Vector We assume that a K−vector of parameters β describes the relationship between the treatment vector and response. Specifically, we use a standard parameterization and suppose that τ ∼Gamma(a, b), β ∼ MVN(0, τ −1) and that Yi ∼Normal(Ziβ, τ −1). πk ∼ Stick-Breaking (α) zi,k ∼ Bernoulli(πk) Xi|Zi, A ∼ MVN(ZiA, σ2 XID) Ak ∼ MVN(0, σ2 AID) Yi|Zi, β ∼ Normal(Ziβ, τ −1) τ ∼ Gamma(a, b) β|τ ∼ MVN(0, τ −1IK) (1) 2.2.1 Inference for the Supervised Indian Buffet Process We approximate the posterior distribution with a variational approximation, building on the algorithm introduced in (Doshi-Velez et al., 2009). We approximate the nonparametric posterior setting K to be large and use a factorized approximation, assuming that p(π, Z, A, β, τ|X, Y , α, σ2 A, σ2 X, a, b) = q(π)q(A)q(Z)q(β, τ) A standard derivation that builds on (DoshiVelez et al., 2009) leads to the following distributional forms and update steps: • q(πK) = Beta(πk|λk). The update values are λk,1 = α K + PN i=1 νi,k and λk,2 = 1 + PN i=1(1 −νi,k). • q(Ak) = Multivariate Normal(Ak|¯φk, Φk). The updated parameter values are, ¯φk = h 1 σ2 X PN i=1 νi,k  Xi − P l:l̸=k νi,l ¯φl i Φk Φk =  1 σ2 A + PN i=1 νi,k σ2 X −1 I • q(β, τ) = Multivariate Normal(β|m, S) × Gamma(τ|c, d). The updated parameter values are, m = SE[ZT ]Y S = (E[ZT Z] + IK)−1 c = a + N 2 d = b + YT Y −YT E[Z]SE[ZT ]Y 2 Where typical element of E[ZT ]j,k = νj,k and typical on-diagonal element of E[ZT Z]k,k = PN i=1 νi,k and off-diagonal element is E[ZT Z]j,k = PN i=1 νi,jνi,k. • q(zi,k) = Bernoulli(zi,k|νi,k). The updated parameter values are vi,k = ψ(λk,1) −ψ(λk,2) − 1 2σ2 X [−2¯φkXT i +(tr(Φk) + ¯φk ¯φT k ) + 2¯φk P l:l̸=k νi,l ¯φT l  ] −c 2d(−2mkYi +  dSk,k c−1 + mT k mk  + 2mk P l:l̸=k νi,lml  ) νi,k = 1 1+exp(−vi,k) where ψ(·) is the digamma function. We repeat the algorithm until the change in the parameter vector drops below a threshold. To select the final model using the training set data, we perform a two-dimensional line search over values of α and σX.4 We then run the model 4We assign σA, a, and b values which lead to diffuse priors. 1604 several times for each combination of values for α and σX to evaluate the output at several different local modes. To create a candidate set of models, we use a quantitative measure that balances coherence and exclusivity (Mimno et al., 2011; Roberts et al., 2014). Let Ik be the set of documents for which νi,k ≥0.5, and let IC k be the complement of this set. We identify the top ten words for intervention k as the ten words with the largest value in Ak, tk and define Nk = PN i=1 I{νi,k ≥0.5}. We then obtain measure CE for a particular model CE = PK k=1 Nk P l,c∈tk cov(XIk,l, XIk,c) − PK k=1(N −Nk) P l,c∈tk cov(XIC k ,l, XIC k ,c) where here XIk,l refers to the lth column and Ikth rows of X. 
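A direct NumPy transcription of the CE measure may clarify the computation; the variable names below are ours, and since the text does not specify whether the l = c covariance terms are included in the inner sums, all pairs are summed here.

import numpy as np

def ce_score(X, nu, phi_bar, top=10):
    # X       : (N, D) standardized document-term matrix
    # nu      : (N, K) variational probabilities q(z_{i,k} = 1)
    # phi_bar : (K, D) posterior means of the loadings A_k
    N, D = X.shape
    K = nu.shape[1]

    def pair_cov_sum(rows, cols):
        # sum over l, c in t_k of cov(X[rows, l], X[rows, c])
        if rows.sum() < 2:
            return 0.0
        C = np.cov(X[np.ix_(rows, cols)], rowvar=False)
        return C.sum()

    score = 0.0
    for k in range(K):
        t_k = np.argsort(phi_bar[k])[::-1][:top]  # ten largest entries of A_k
        in_k = nu[:, k] >= 0.5                    # documents in I_k
        N_k = int(in_k.sum())
        score += N_k * pair_cov_sum(in_k, t_k)
        score -= (N - N_k) * pair_cov_sum(~in_k, t_k)
    return score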
We make a final model selection based on the model that provides the most substantively clear treatments. 2.3 Inferring Treatments and Estimating Effects in Test Set To discover the treatment effects, we first suppose that we have randomly assigned a set of respondents a text based treatment Xi according to some probability measure h(·) and that we have observed their response Yi. We collect the assigned texts into X and the responses into Y . As we describe below, we will often assign each respondent their own distinctive message, with the probability of receiving any one message at 1 N for all respondents and messages. We use the sIBP model trained our training set documents and responses to infer the effect of those treatments among the test set documents. Separating the documents and responses into training and test sets ensures that Assumption 1, SUTVA, holds. We learn the mapping from texts to binary vectors in the training set, ˆg(·) and then apply this mapping to the test set to infer the latent treatments present in the test set documents, without considering the test set responses. Dividing texts and responses into training and test sets provides a solution to SUTVA violations present in other attempts at causal inference in text analysis (Roberts et al., 2014). We approximate the posterior distribution for the treatment vectors using the variational approximation from the training set parameters (bλ, b¯φ, bΦ, c m, bS, bc, bd, c σ2 X, c σ2 A) and a modified update step on q(ztest i,k ). In this modified update step, we remove the component of the update that incorporates information about the outcome. Specifically for individual i in the test set for category k we have the following update step vtest i,k = ψ(d λk,1) −ψ(d λk,2) − 1 2( c σ2 X) × [−2c ¯φk(XT i ) + (tr(c Φk) + c ¯φk(c ¯φk)T ) + 2c ¯φk P l:l̸=k νi,l  b¯φl T  ] νtest i,k = 1 1+exp(−vtest i,k). For each text in the test set we repeat this update several times until νtest has converged. Note that for the test set we have excluded the component of the model that links the latent features to the response, ensuring that SUTVA holds. With the approximating distribution q(Ztest) we then measure the effect of the treatments in the test set. Using the treatments, the most straightforward model to estimate assumes that there are no interactions between each of the components. Under the no interactions assumption, we estimate the effects of the treatments and infer confidence intervals using the following bootstrapping procedure that incorporates uncertainty both from estimation of treatments and uncertainty about the effects of those treatments: 1) For each respondent i and component k we draw ˜zi,k ∼Bernoulli(νtest i,k ), resulting in matrix ˜ Z. 2) Given the matrix ˜ Z, we sample (Y test, ˜ Z) with replacement and for each sample estimate the regression Y test = βtest ˜Z + ϵ. We repeat the bootstrap steps 1000 times, keeping βtest for each iteration. The result of the procedure is a point estimate of the effects and confidence interval of the treatments under no interactions. Technically, it is possible to estimate the treatment effects in our variational approximation. But we estimate the effects in a second-stage regression because variational approximations tend to understate uncertainty, the bootstrap provides a straightforward method for including uncertainty from estimation of the latent features and the effect estimates, and it ensures that SUTVA is not violated. 
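A minimal NumPy sketch of this bootstrap under the no-interactions assumption is given below; the inclusion of an intercept and the use of ordinary least squares in the second-stage regression are our assumptions, as these details are not fixed above.

import numpy as np

def bootstrap_effects(nu_test, y_test, n_boot=1000, seed=0):
    # nu_test : (N, K) inferred treatment probabilities nu^test on the test set
    # y_test  : (N,)   test-set responses
    # returns : (n_boot, K + 1) array of coefficients (intercept first)
    rng = np.random.default_rng(seed)
    N, K = nu_test.shape
    draws = np.empty((n_boot, K + 1))
    for b in range(n_boot):
        Z = rng.binomial(1, nu_test)          # step 1: draw z~_{i,k} ~ Bernoulli(nu)
        idx = rng.integers(0, N, size=N)      # step 2: resample (Y, Z~) with replacement
        design = np.column_stack([np.ones(N), Z[idx]])
        draws[b] = np.linalg.lstsq(design, y_test[idx], rcond=None)[0]
    return draws

# betas = bootstrap_effects(nu_test, y_test)
# point = betas.mean(axis=0)
# ci_lo, ci_hi = np.percentile(betas, [2.5, 97.5], axis=0)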
3 Application: Voter Evaluations of an Ideal Candidate We demonstrate our method in an experiment to assess how features of a candidate’s background affect respondents evaluations of the candidates. There is a rich literature in political science about 1605 the ideal attributes of political candidates (Canon, 1990; Popkin, 1994; Carnes, 2012; Campbell and Cowley, 2014). We build on this literature and use a collection of candidate biographies to discover features of candidates’ backgrounds that voters find appealing. To uncover the features of candidate biographies that voters are responsive to we acquired a collection of 1,246 Congressional candidate biographies from Wikipedia. We then anonymize the biographies—replacing names and removing other identifiable information—to ensure that the only information available to the respondent was explicitly present in the text. In Section 2.1 we show that a necessary condition for this experiment to uncover latent treatments is that each vector of treatments has nonzero probability of occuring. This is equivalent to assuming that none of the treatments are aliased, or perfectly correlated (Hainmueller et al., 2014). Aliasing would be more likely if there are only a few distinct texts that are provided to participants in our experiment. Therefore, we assign each respondent in each evaluation round a distinct candidate biography. To bolster our statistical power, we ask our respondents to evaluate up to four distinct candidate biographies, resulting in each respondent evaluating 2.8 biographies on average.5 After presenting the respondents with a candidate’s biography, we ask each respondent to rate the candidate using a feeling thermometer: a well-established social science scale that goes from 0 when a respodent is “cold” to a candidate to 100 when a respondent is “warm” to the candidate. We recruited a sample of 1,886 participants using Survey Sampling International (SSI), an online survey platform. Our sample is census matched to reflect US demographics on sex, age, race, and education. Using the sample we obtain 5,303 total observations. We assign 2,651 responses to the training set and 2,652 to the test set. We then apply the sIBP process to the training data. To apply the model, we standardize the feeling thermometer to have mean zero and standard deviation 1. We set K to a relatively low value (K = 10) reflecting a quantitative and qualitative search over K. We then select the final model varying the parameters 5The multiple evaluations of candidate biographies is problematic if there is spillover across rounds of our experiment. We have little reason to believe observing one candidate biography would systematically affect the response in subsequent rounds. and evaluating the CE score. Table 2 provides the top words for each of the ten treatments the sIBP discovered in the training set. We selected ten treatments using a combination of guidance from the sIBP, assessment using CE scores, and our own qualitative assessment of the models (Grimmer and Stewart, 2013). While it is true that our final selection depends on human input, some reliance on human judgment at this stage is appropriate. If one set includes a treatment about military service but not legal training and another set includes a treatment about legal training but not military service, then model selection is tantamount to deciding which hypotheses are most worthy of investigation. 
Our CE scores identify sets of treatments that are most likely to be interesting, but the human analyst should make the final decision about which hypotheses he would like to test. However, it is extremely important for the analyst to select a set of treatments first and only afterwards estimate the effects of those treatments. If the analyst observes the effects of some treatments and then decides he would like to test other sets, then the integrity of any p-values he might calculate are undermined by the multiple testing problem. A key feature of our procedure is that it draws a clear line between the selection of hypotheses to test (which leverages human judgment) and the estimation of effects (which is purely mechanical). The estimated treatments cover salient features of Congressional biographies from the time period that we analyze. For example, treatments 6 and 10 capture a candidate’s military experience. Treatment 5 and 7 are about previous political experience and Treatment 3 and 9 refer to a candidate’s education experience. Clearly, there are many features of a candidate’s background missing, but the treatments discovered provide a useful set of dimensions to assess how voters respond to a candidate’s background. Further, the discovered treatments are a combination of those that are both prevalent in the biographies and have an effect on the thermometer rating. The absence of biographical features that we might think matters for candidate evaluation could be because there are few of those biographies in our data set, or because the respondents were unresponive to those features. After training the model on the training set, we apply it to the test set to infer the treatments in the biographies. We assume there are no interactions 1606 Treatment 1 Treatment 2 Treatment 3 Treatment 4 Treatment 5 appointed fraternity director received elected school graduated distinguished university washington university house governor war ii received years democratic worked chapter president death seat older air force master arts company republican law firm phi phd training served elected reserve policy military committee grandfather delta public including appointed office air master george washington defeated legal states air affairs earned bachelors office Treatment 6 Treatment 7 Treatment 8 Treatment 9 Treatment 10 united states republican star law war military democratic bronze school law enlisted combat elected germany law school united states rank appointed master arts juris doctor assigned marine corps member awarded student army medal incumbent played earned juris air distinguished political yale earned law states army air force father football law firm year states air served maternal university school service air state division body president officer Table 2: Top Words for 10 Treatments sIBP Discovered −2.5 0.0 2.5 5.0 1 2 3 4 5 6 7 8 9 10 Feature Effect on Feeling Thermometer Figure 2: 95% Confidence Intervals for Effects of Discovered Treatments: The mean value of the feeling thermometer is 62.3 between the discovered treatments in order to estimate their effects.6 Figure 2 shows the point estimate and 95-percent confidence intervals, which take into account uncertainty in inferring the treatments from the texts and the relationship between those treatments and the response. The treatment effects reveal intuitive, though interesting, features of candidate biographies that affect respondent’s evaluations. 
For example, Figure 2 reveals a distaste for political and legal experience—even though a large share of Congressional candidates have previous political ex6This assumption is not necessary for the framework we propose here. Interaction effects could be modeled, but it would require us to make much stronger parametric assumptions using a method for heterogeneous treatments such as (Imai and Ratkovic, 2013). perience and a law degree. Treatment 5, which describes a candidate’s previous political experience, causes an 2.26 point reduction in feeling thermometer evaluation (95 percent confidence interval, [-4.26,-0.24]). Likewise, Treatment 9 shows that respondents dislike lawyers, with the presence of legal experience causing a 2.34 point reduction in feeling thermometer (95-percent confidence interval, [-4.28,-0.29]). The aversion to lawyers is not, however, an aversion to education. Treatment 3, a treatment that describes advanced degrees, causes a 2.43 point increase in feeling thermometer evaluations (95-percent confidence interval, [0.49,4.38]). In contrast, Figure 2 shows that there is a consistent bonus for military experience. This is consistent with intuition from political observers that the public supports veterans. For example, treatment 6, which describes a candidate’s military record, causes a 3.21 point increase in feeling thermometer rating (95-percent confidence interval, [1.34,5.12]) and treatment 10 causes a 4.00 point increase (95-percent confidence interval, [1.53,6.45]). Because simultaneously discovering treatments from labeled data and estimating their average marginal component effects is a novel task, we cannot compare the performance of our framework against any benchmark. Even so, one natural question is whether the user could obtain much more coherent topics by foresaking the estimation of causal effects and using a more traditional topic 1607 modeling method. We provide the topics discovered by sLDA in Table 3. sIBP discovered most of the same features sLDA did. Both find military service, legal training, political background, and higher education. The Greek life feature is less coherent in sIBP than it is in sLDA, and sLDA finds business and ancestry features that sIBP does not. Both have a few incoherent treatments. This comparison suggests that sIBP does almost as well as sLDA at identifying coherent latent features, while also facilitating the estimation of marginal treatment effects. 4 Conclusion We have presented a methodology for discovering treatments in text and then inferring the effect of those treatments on respondents’ decisions. We prove that randomizing texts is sufficient to identify the underlying treatments and introduce the supervised Indian Buffet process for discovering the effects. The use of a training and test set ensures that our method provides accurate confidence intervals and avoids the problems of overfitting or “p-hacking” in experiments. In an application to candidate biographies, we discover a penalty for political and legal experience and a bonus for military service and non-legal advanced degrees. Our methodology has a wide variety of applications. This includes numerous alternative experimental designs, providing a methodology that computational social scientists could use widely to discover and then confirm the effects of messages in numerous domains—including images and other high dimensional data. 
The methodology is also useful for observational data—for studying the effects of complicated treatments, such as how a legislator’s roll call voting record affects their electoral support. Acknowledgments This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation References John Aitchison. 1986. The Statistical Analysis of Compositional Data. Chapman and Hall. Bethany Albertson and Shana Kushner Gadarian. 2015. Anxious Politics: Democratic Citizenship in a Threatening World. Cambridge University Press. Stephen Ansolabehere and Shanto Iyengar. 1995. Going Negative: How Political Advertisements Shrink and Polarize The Electorate. Simon & Schuster, Inc. Nick Beauchamp. 2011. A bottom-up approach to linguistic persuasion in advertising. The Political Methodologist. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning and Research, 3:993–1022. Rosie Campbell and Philip Cowley. 2014. What voters want: Reactions to candidate characteristics in a survey experiment. Political Studies, 62(4):745–765. David T. Canon. 1990. Actors, Athletes, and Astronauts: Political Amateurs in the United States Congress. University of Chicago Press. Nicholas Carnes. 2012. Does the numerical underrepresentation of the working class in congress matter? Legislative Studies Quarterly, 37(1):5–34. Finale Doshi-Velez, Kurt T. Miller, Jurgen Van Gael, and Yee Whye Teh. 2009. Variational inference for the indian buffet process. Technical Report, University of Cambridge. Annie Franco, Neil Malhotra, and Gabor Simonovits. 2014. Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203):1502– 1505. Alan S. Gerber and Donald P. Green. 2012. Field Experiment: Design, Analysis, and Interpretation. W.W. Norton & Company. Zoubin Ghahramani and Thomas L Griffiths. 2005. Infinite latent feature models and the indian buffet process. In Advances in neural information processing systems, pages 475–482. Justin Grimmer and Brandon M. Stewart. 2013. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis, 21(3):267–297. Jens Hainmueller, Daniel Hopkins, and Teppei Yamamoto. 2014. Causal inference in conjoint analysis: Understanding multi-dimensional choices via stated preference experiments. Political Analysis, 22(1):1–30. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2001. The Elements of Statistical Learning. Springer. 
1608 Treatment 1 Treatment 2 Treatment 3 Treatment 4 Treatment 5 years father school family united states work n´ee medical father states army national mother college white served united worked business public schools parents war board irish attended year war ii young son county mother army local long city brother served director family born married service social ancestry schools years lieutenant community descent studied played military Treatment 6 Treatment 7 Treatment 8 Treatment 9 Treatment 10 fraternity elected company board law school member republican officer graduated law student served united staets college school law phi army mexico bachelor arts attorney delta democratic air force harvard juris doctor kappa member years state university bar hall house representatives military professor law firm chi state national guard masters degree court graduated senate insurance high school law degree son election business economics judge Table 3: Top Words for 10 Treatments sLDA Discovered Paul Holland. 1986. Statistics and causal inference. Journal of the American Statistical Association, 81(396):945–960. Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt. 2013. Fishing, commitment, and communication: A proposal for comprehensive nonbinding research registration. Political Analysis, 21(1):1–20. Kosuke Imai and Marc Ratkovic. 2013. Estimating treatment effect heterogeneity in randomized program evaluation. The Annals of Applied Statistics, 7(1):443–470. John P. A. Ioannidis. 2005. Why most published research findings are false. PLoS Medicine, 2(8):696– 701. Jonathan Katz and Gary King. 1999. A statistical model for multiparty electoral data. The American Political Science Review, 93(1):15–32. Jon D. Mcauliffe and David M. Blei. 2007. Supervised topic models. In Advances in Neural Information Processing Systems 20 (NIPS 2007). David Mimno, Hanna M Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 262– 272. Diana C Mutz. 2011. Population-Based Survey Experiments. Princeton University Press. Samuel L Popkin. 1994. The Reasoning Voter: Communication and Persuasion in Presidential Campaigns. University of Chicago Press. Novi Quadrianto, Viktoriia Sharmanska, David A. Knowles, and Zoubin Ghahramani. 2013. The supervised ibp: Neighbourhood preserving infinite latent feature models. page 101. Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Chris Lucas, Jetson Leder-Luis, Bethany Albertson, Shana Gadarian, and David Rand. 2014. Topic models for open ended survey responses with applications to experiments. American Journal of Political Science. Margaret E. Roberts, Brandon M. Stewart, and Edo M. Airoldi. 2016. A model of text for experimentation in the social sciences. Journal of the American Statistical Association. Forthcoming. Don Rubin. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66:688–701. Donald B. Rubin. 1986. Statistics and causal inference: Comment: Which ifs have causal answers. Journal of the American Statistical Association, 81(396):961–962. Michael Tomz and Jessica Weeks. 2013. Public opinion and the democratic peace. American Political Science Review, 107(4):849–865. 1609
2016
151
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1610–1620, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Structured Predictors from Bandit Feedback for Interactive NLP Artem Sokolov⋄,∗and Julia Kreutzer∗and Christopher Lo†,∗and Stefan Riezler‡,∗ ∗Computational Linguistics & ‡IWR, Heidelberg University, Germany {sokolov,kreutzer,riezler}@cl.uni-heidelberg.de †Department of Mathematics, Tufts University, Boston, MA, USA [email protected] ⋄Amazon Development Center, Berlin, Germany Abstract Structured prediction from bandit feedback describes a learning scenario where instead of having access to a gold standard structure, a learner only receives partial feedback in form of the loss value of a predicted structure. We present new learning objectives and algorithms for this interactive scenario, focusing on convergence speed and ease of elicitability of feedback. We present supervised-to-bandit simulation experiments for several NLP tasks (machine translation, sequence labeling, text classification), showing that bandit learning from relative preferences eases feedback strength and yields improved empirical convergence. 1 Introduction Structured prediction from partial information can be described by the following learning protocol: On each of a sequence of rounds, the learning algorithm makes a prediction, and receives partial information in terms of feedback on the predicted point. This single-point feedback is used to construct a parameter update that is an unbiased estimate of the respective update rule for the full information objective. In difference to the full information scenario, the learner does not know what the correct prediction looks like, nor what would have happened if it had predicted differently. This learning scenario has been investigated under the names of learning from bandit feedback1 or rein∗The work for this paper was done while the authors were at Heidelberg University. 1The name is inherited from a model where in each round a gambler pulls an arm of a different slot machine (“onearmed bandit”), with the goal of maximizing his reward relative to the maximal possible reward, without apriori knowledge of the optimal slot machine. See Bubeck and CesaBianchi (2012) for an overview. forcement learning2, and has (financially) important real world applications such as online advertising (Chapelle et al., 2014). In this application, the probability that an ad will be clicked (and the advertiser has to pay) is estimated by trading off exploration (a new ad needs to be displayed in order to learn its click-through rate) and exploitation (displaying the ad with the current best estimate is better in the short term) in displaying ads to users. Similar to the online advertising scenario, there are many potential applications to interactive learning in NLP. For example, in interactive statistical machine translation (SMT), user feedback in form of post-edits of predicted translations is used for model adaptation (Bertoldi et al., 2014; Denkowski et al., 2014; Green et al., 2014). Since post-editing feedback has a high cost and requires professional expertise of users, weaker forms of feedback are desirable. Sokolov et al. (2015) showed in a simulation experiment that partial information in form of translation quality judgements on predicted translations is sufficient for model adaptation in SMT. 
However, one drawback of their bandit expected loss minimization algorithm is the slow convergence speed, meaning that impractically many rounds of user feedback would be necessary for learning in real-world interactive SMT. Furthermore, their algorithms requires feedback in form of numerical assessments of translation quality. Such absolute feedback is arguably harder to elicit from human users than relative judgements. The goal of this work is a preparatory study of different objectives and algorithms for structured prediction from partial information with real-world interactive scenarios in mind. Since the algorithm of Sokolov et al. (2015) can be characterized as stochastic optimization of a non-convex 2See Szepesv´ari (2009) for an overview of algorithms for reinforcement learning and their relation to bandit learning. 1610 objective, a possible avenue to address the problem of convergence speed is a (strong) convexification of the learning objective, which we formalize as bandit cross-entropy minimization. To the aim of easing elicitability of feedback, we present a bandit pairwise preference learning algorithm that requires only relative feedback in the form of pairwise preference rankings. The focus of this paper is on an experimental evaluation of the empirical performance and convergence speed of the different algorithms. We follow the standard practice of early stopping by measuring performance on a development set, and present results of an extensive evaluation on several tasks with different loss functions, including BLEU for SMT, Hamming loss for optical character recognition, and F1 score for chunking. In our experiments, we use a standard supervisedto-bandit transformation where a reward signal is simulated by evaluating a task loss against gold standard structures without revealing them to the learning algorithm (Agarwal et al., 2014). From the perspective of real-world interactive applications, bandit pairwise preference learning is the preferred algorithm since it only requires comparative judgements for learning. This type of relative feedback been shown to be advantageous for human decision making (Thurstone, 1927). However, in our simulation experiments we found that relative feedback also results in improved empirical convergence speed for bandit pairwise preference learning. The picture of fastest empirical convergence of bandit pairwise preference learning is consistent across different tasks, both compared to bandit expected loss minimization and bandit cross-entropy minimization. Given the improved convergence and the ease of elicitability of relative feedback, the presented bandit pairwise preference learner is an attractive choice for interactive NLP tasks. 2 Related Work Reinforcement learning (RL) has the goal of maximizing the expected reward for choosing an action at a given state in a Markov Decision Process (MDP) model, where rewards are received at each state or once at the final state. The algorithms in this paper can be seen as one-state MDPs where choosing an action corresponds to predicting a structured output. Most closely related are RL approaches that use gradient-based optimization of a parametric policy for action selection (Bertsekas and Tsitsiklis, 1996; Sutton et al., 2000). Policy gradient approaches have been applied to NLP tasks by Branavan et al. (2009), Chang et al. (2015) or Ranzato et al. (2016). Bandit learning operates in a similar scenario of maximizing the expected reward for selecting an arm of a multi-armed slot machine. 
Similar to our case, the models consist of a single state, however, arms are usually selected from a small set of options while structures are predicted over exponential output spaces. While bandit learning is mostly formalized as online regret minimization with respect to the best fixed arm in hindsight, we investigate asymptotic convergence of our algorithms. In the spectrum of stochastic (Auer et al., 2002a) versus adversarial bandits (Auer et al., 2002b), our approach takes a middle path by making stochastic assumptions on inputs, but not on rewards. Most closely related are algorithms that optimize parametric models, e.g., contextual bandits (Langford and Zhang, 2007; Li et al., 2010) or combinatorial bandits (Dani et al., 2007; Cesa-Bianchi and Lugosi, 2012). To the best of our knowledge, these types of algorithms have not yet been applied in the area of NLP. Pairwise preference learning has been studied in the full information supervised setting (see Herbrich et al. (2000), Joachims (2002), Freund et al. (2003), Cortes et al. (2007), F¨urnkranz and H¨ullermeier (2010), inter alia) where given preference pairs are assumed. Stochastic optimization from two-point (or multi-point) feedback has been investigated in the framework of gradient-free optimization (see Yue and Joachims (2009), Agarwal et al. (2010), Ghadimi and Lan (2012), Jamieson et al. (2012), Duchi et al. (2015), inter alia), while our algorithms can be characterized as stochastic gradient descent algorithms. 3 Probabilistic Structured Prediction 3.1 Full Information vs. Bandit Feedback The objectives and algorithms presented in this paper are based on the well-known expected loss criterion for probabilistic structured prediction (see Och (2003), Smith and Eisner (2006), Gimpel and Smith (2010), Yuille and He (2012), He and Deng (2012), inter alia). The objective is defined as a minimization of the expectation of a given task loss function with respect to the conditional distribution over structured outputs. This criterion 1611 has the form of a continuous, differentiable, and in general, non-convex objective function. More formally, let X be a structured input space, let Y(x) be the set of possible output structures for input x, and let ∆y : Y →[0, 1] quantify the loss ∆y(y′) suffered for predicting y′ instead of the gold standard structure y; as a rule, ∆y(y′) = 0 iff y = y′. In the full information setting, for a data distribution p(x, y), the learning criterion is defined as minimization of the expected loss with respect to w ∈Rd where Ep(x,y)pw(y′|x)  ∆y(y′)  (1) = X x,y p(x, y) X y′∈Y(x) ∆y(y′)pw(y′|x). Assume further that output structures given inputs are distributed according to an underlying Gibbs distribution (a.k.a. conditional exponential or loglinear model) pw(y|x) = exp(w⊤φ(x, y))/Zw(x), where φ : X × Y →Rd is a joint feature representation of inputs and outputs, w ∈Rd is an associated weight vector, and Zw(x) is a normalization constant. For this model, the gradient of objective (1) is as follows: ∇Ep(x,y)pw(y′|x)  ∆y(y′)  = Ep(x,y)pw(y′|x) h ∆y(y′) φ(x, y′) −Epw(y′|x)[φ(x, y′)]  i . (2) Unlike in the full information scenario, bandit feedback in structured prediction means that the gold standard output structure y, with respect to which the objective function is evaluated, is not revealed to the learner. Thus we can neither evaluate the task loss ∆nor calculate the gradient (2) of the objective function (1). 
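To make the Gibbs model and the gradient in Eq. (2) concrete, here is a minimal numpy sketch (not from the paper) of the full-information expected-loss gradient for a single input x with a small, enumerable output set; the feature vectors and loss values are toy placeholders.

```python
# Minimal sketch: exact expected-loss gradient under a Gibbs model for one
# input x with an enumerable output set Y(x). phi(x, y) and Delta(y) are
# hypothetical stand-ins for a real structured task.
import numpy as np

def gibbs_probs(w, feats):
    """feats: array of shape (|Y|, d) holding phi(x, y) for every y in Y(x)."""
    scores = feats @ w
    scores -= scores.max()            # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def expected_loss_gradient(w, feats, losses):
    """Inner expectation of Eq. (2) for a single input x."""
    p = gibbs_probs(w, feats)
    mean_feat = p @ feats             # E_{p_w}[phi(x, y)]
    centered = feats - mean_feat      # phi(x, y) - E_{p_w}[phi(x, y)]
    # sum_y p_w(y|x) * Delta(y) * (phi(x, y) - mean_feat)
    return (p * losses) @ centered

# toy usage: 4 candidate outputs, 3 features
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
losses = np.array([0.0, 0.3, 0.7, 1.0])
print(expected_loss_gradient(np.zeros(3), feats, losses))
```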
A solution to this problem is to pass the evaluation of the loss function to the user, i.e, we access the loss directly through user feedback without assuming existence of a fixed reference y. We indicate this by dropping the subscript referring to the gold standard structure in the definition of ∆. In all algorithms presented below we need to make the following assumptions: 1. We assume a sequence of input structures xt, t = 1, . . . , T that are generated by a fixed, unknown distribution p(x). Algorithm 1 Bandit Expected Loss Minimization 1: Input: sequence of learning rates γt 2: Initialize w0 3: for t = 0, . . . , T do 4: Observe xt 5: Calculate Epwt(y|xt)[φ(xt, y)] 6: Sample ˜yt ∼pwt(y|xt) 7: Obtain feedback ∆(˜yt) 8: wt+1 = wt −γt ∆(˜yt) 9: × φ(xt, ˜yt) −Epwt[φ(xt, y)]  Algorithm 2 Bandit Pairwise Preference Learning 1: Input: sequence of learning rates γt 2: Initialize w0 3: for t = 0, . . . , T do 4: Observe xt 5: Calculate Epwt(⟨yi,yj⟩|xt)[φ(xt, ⟨yi, yj⟩)] 6: Sample ⟨˜yi, ˜yj⟩t ∼pwt(⟨yi, yj⟩|xt) 7: Obtain feedback ∆(⟨˜yi, ˜yj⟩t) 8: wt+1 = wt −γt ∆(⟨˜yi, ˜yj⟩t) 9: × φ(xt, ⟨˜yi, ˜yj⟩t)−Epwt[φ(xt, ⟨yi, yj⟩)]  2. We use a Gibbs model as sampling distribution to perform simultaneous exploitation (use the current best estimate) / exploration (get new information) on output structures. 3. We use feedback to the sampled output structures to construct a parameter update rule that is an unbiased estimate of the true gradient of the respective objective. 3.2 Learning Objectives and Algorithms Bandit Expected Loss Minimization. Algorithm 1 has been presented in Sokolov et al. (2015) and minimizes the objective below by stochastic gradient descent optimization. It is non-convex for the specific instantiations in this paper: Ep(x)pw(y|x) [∆(y)] (3) = X x p(x) X y∈Y(x) ∆(y)pw(y|x). Intuitively, the algorithm compares the sampled feature vector to the average feature vector, and performs a step into the opposite direction of this difference, the more so the higher the loss of the sampled structure is. In the extreme case, if the sampled structure is correct (∆(˜yt) = 0), no update is performed. 1612 Algorithm 3 Bandit Cross-Entropy Minimization 1: Input: sequence of learning rates γt 2: Initialize w0 3: for t = 0, . . . , T do 4: Observe xt 5: Sample ˜yt ∼pwt(y|xt) 6: Obtain feedback g(˜yt) 7: wt+1 = wt −γt g(˜yt) pwt(˜yt|xt) 8: × −φ(xt, ˜yt) + Epwt[φ(xt, ˜yt)]  Bandit Pairwise Preference Learning. Decomposing complex problems into a series of pairwise comparisons has been shown to be advantageous for human decision making (Thurstone, 1927) and for machine learning (F¨urnkranz and H¨ullermeier, 2010). For our case, this idea can be formalized as an expected loss objective with respect to a conditional distribution of pairs of structured outputs. Let P(x) = {⟨yi, yj⟩|yi, yj ∈ Y(x)} denote the set of output pairs for an input x, and let ∆(⟨yi, yj⟩) : P(x) →[0, 1] denote a task loss function that specifies a dispreference of yi compared to yj. Instantiating objective (3) to the case of pairs of output structures defines the following objective: Ep(x)pw(⟨yi,yj⟩|x) [∆(⟨yi, yj⟩)] . (4) Stochastic gradient descent optimization of this objective leads to Algorithm 2. The objective is again non-convex in the use cases in this paper. Minimization of this objective will assure that high probabilities are assigned to pairs with low loss due to misranking yj over yi. 
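A minimal simulation of one round of Algorithm 1 above might look as follows; the enumerable candidate set, the feedback function and all constants are toy stand-ins for a real structured task.

```python
# Sketch of one round of Algorithm 1 (bandit expected loss minimization) for a
# small enumerable output set; the simulated feedback function is a placeholder.
import numpy as np

def gibbs_probs(w, feats):
    s = feats @ w
    s -= s.max()
    p = np.exp(s)
    return p / p.sum()

def bandit_expected_loss_step(w, feats, feedback, gamma, rng):
    """feats[i] = phi(x, y_i); feedback(i) returns Delta(y_i) in [0, 1]."""
    p = gibbs_probs(w, feats)
    mean_feat = p @ feats                  # E_{p_w}[phi(x, y)]
    i = rng.choice(len(p), p=p)            # sample y_t ~ p_w(.|x_t)
    loss = feedback(i)                     # user feedback; gold is never revealed
    return w - gamma * loss * (feats[i] - mean_feat)

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 4))
true_losses = np.array([0.0, 0.2, 0.5, 0.8, 1.0])   # hidden from the learner
w = np.zeros(4)
for _ in range(1000):
    w = bandit_expected_loss_step(w, feats, lambda i: true_losses[i], 0.05, rng)
print(gibbs_probs(w, feats))   # most mass should move to the low-loss output
```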
Stronger assumptions on the learned probability ranking can be made if assumptions of transitivity and asymmetry of the ordering of feedback structures are made. For efficient sampling and calculation of expectations, we assume a Gibbs model that factorizes as follows: pw(⟨yi, yj⟩|x) = ew⊤(φ(x,yi)−φ(x,yj)) P ⟨yi,yj⟩∈P(x) ew⊤(φ(x,yi)−φ(x,yj)) = pw(yi|x)p−w(yj|x). If a sample from the p−w distribution is preferred over a sample from the pw distribution, this is a strong signal for model correction. Bandit Cross-Entropy Minimization. The standard theory of stochastic optimization predicts considerable improvements in convergence speed depending on the functional form of the objective. This motivates the formalization of convex upper bounds on expected normalized loss as presented in Green et al. (2014). Their objective is based on a gain function g : Y →[0, 1] (in this work, g(y) = 1 −∆(y)) that is normalized over n-best lists where ¯g(y) = g(y) Zg(x) and Zg(x) = P y∈n-best(x) g(y). It can be seen as the cross-entropy of model pw(y|x) with respect the “true” distribution ¯g(y): Ep(x)¯g(y) [−log pw(y|x)] (5) = − X x p(x) X y∈Y(x) ¯g(y) log pw(y|x). For a proper probability distribution ¯g(y), an application of Jensen’s inequality to the convex negative logarithm function shows that objective (5) is a convex upper bound on objective (3). However, normalizing the gain function is prohibitive in a bandit setting since it would require to elicit user feedback for each structure in the output space or n-best list. We thus work with an unnormalized gain function which sacrifices the upper bound but preserves convexity. This can be seen by rewriting the objective as the sum of a linear and a convex function in w: Ep(x)g(y) [−log pw(y|x)] (6) = − X x p(x) X y∈Y(x) g(y)w⊤φ(x, y) + X x p(x)(log X y∈Y(x) exp(w⊤φ(x, y)))α(x), where α(x) = P y∈Y(x) g(y) is a constant factor not depending on w. The gradient of objective (6) is as follows: ∇(− X x p(x) X y∈Y(x) g(y) log pw(y|x)) = Ep(x)ps(y|x)  g(y) ps(y|x) −φ(x, y) + Epw(y|x)[φ(x, y)]  . Minimization of this objective will assign high probabilities to structures with high gain, as desired. Algorithm 3 minimizes this objective by sampling from a distribution ps(y|x), receiving feedback, and updating according to the ratio of gain versus current probability of the sampled structure. A positive ratio expresses a preference 1613 of the sampled structure under the gain function compared to the current probability estimate. We compare the sampled feature vector to the average feature vector, and we update towards the sampled feature vector relative to this ratio. We instantiate ps(y|x) to the current update of pwt(y|x) in order to present progressively more useful structures to the user. In contrast to Algorithms 1 and 2, each update is thus affected by a probability that changes over time and is unreliable when training is started. This further increases the variance already present in stochastic optimization. We deal with this problem by clipping too small sampling probabilities (Ionides, 2008) or by reducing variance using momentum techniques (Polyak, 1964). 3.3 Remarks on Theoretical Analysis Convergence of our algorithms can be analyzed using results of standard stochastic approximation theory. For example, Sokolov et al. (2015) analyze the convergence of Algorithm 1 in the pseudogradient framework of Polyak and Tsypkin (1973), relying on the fact that a positive inner product of the update vector with the gradient in expectation suffices for convergence. 
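Before the theoretical remarks continue, here is a rough sketch of one round of Algorithm 2 under the factorized pair model above; the binary dispreference feedback mimics the BLEU comparison used later in the SMT experiments, and the hidden quality scores are invented for illustration.

```python
# Sketch of one round of Algorithm 2 (bandit pairwise preference learning),
# using the factorization p_w(<y_i, y_j>|x) = p_w(y_i|x) * p_{-w}(y_j|x).
# The preference feedback function is a placeholder for a human comparison.
import numpy as np

def gibbs_probs(w, feats):
    s = feats @ w
    s -= s.max()
    p = np.exp(s)
    return p / p.sum()

def pairwise_step(w, feats, pair_feedback, gamma, rng):
    p_pos = gibbs_probs(w, feats)          # p_w(y_i | x)
    p_neg = gibbs_probs(-w, feats)         # p_{-w}(y_j | x)
    i = rng.choice(len(p_pos), p=p_pos)
    j = rng.choice(len(p_neg), p=p_neg)
    loss = pair_feedback(i, j)             # 1 if y_j is preferred over y_i, else 0
    pair_feat = feats[i] - feats[j]        # phi(x, <y_i, y_j>)
    # E over pairs factorizes: E_{p_w}[phi] - E_{p_{-w}}[phi]
    mean_pair_feat = p_pos @ feats - p_neg @ feats
    return w - gamma * loss * (pair_feat - mean_pair_feat)

rng = np.random.default_rng(2)
feats = rng.normal(size=(5, 4))
quality = np.array([1.0, 0.8, 0.5, 0.2, 0.0])   # hidden per-output quality
w = np.zeros(4)
for _ in range(2000):
    w = pairwise_step(w, feats, lambda i, j: float(quality[j] > quality[i]), 0.05, rng)
print(gibbs_probs(w, feats))   # mass should concentrate on high-quality outputs
```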
Sokolov et al. (2016) analyze convergence in the framework of stochastic first-order optimization of Ghadimi and Lan (2012), relying on the fact that the update vectors of the algorithms are stochastic gradients of the respective objectives, that is, the update vectors are unbiased gradient measurements that equal the gradient of the full information objective in expectation. Note that the latter analysis covers the use of constant learning rates. Convergence speed is analyzed in standard stochastic approximation theory in terms of the number of iterations needed to reach an accuracy of ϵ for a gradient-based criterion E[∥∇J(wt)∥2] ≤ϵ, (7) where J(wt) denotes the objective to be minimized. Following Ghadimi and Lan (2012), the iteration complexity of the non-convex objectives underlying our Algorithms 1 and 2 can be given as O(1/ϵ2) (see Sokolov et al. (2016)). Algorithm 3 can be seen as stochastic optimization of a strongly convex objective that is attained by adding an ℓ2 regularizer λ 2∥w∥2 with constant λ > 0 to objective (6). In the standard stochastic approximation theory, the iteration complexity of stochastic gradient algorithms using decreasing learning rates can be given as O(1/ϵ) for an objective value-based criterion E[J(wt)] −J(w∗) ≤ϵ, where w∗= arg minw J(w) (Polyak, 1987). For constant learning rates, even faster convergence can be shown provided certain additional conditions are met (Solodov, 1998). While the asymptotic iteration complexity bounds predict faster convergence for Algorithm 3 compared to Algorithms 1 and 2, and equal convergence speed for the latter two, Sokolov et al. (2016) show that the hidden constant of variance of the stochastic gradient can offset this advantage empirically. They find smallest variance of stochastic updates and fastest empirical convergence under the gradient-based criterion (7) for Algorithm 2. In the next section we will present experimental results that show similar relations of fastest convergence of Algorithm 2 under a convergence criterion based on task loss evaluation on heldout data. 4 Experiments Experimental design. Our experiments follow an online learning protocol where on each of a sequence of rounds, an output structure is randomly sampled, and feedback to it is used to update the model (Shalev-Shwartz, 2012). We simulate bandit feedback by evaluating ∆against gold standard structures which are never revealed to the learner (Agarwal et al., 2014). Training is started from w0 = 0 or from an out-of-domain model (for SMT). Following the standard practice of early stopping by performance evaluation on a development set, we compute convergence speed as the number of iterations needed to find the point of optimal performance before overfitting on the development set occurs. The convergence criterion is thus based on the respective task loss function ∆(ˆywt(x)) under MAP prediction ˆyw(x) = arg maxy∈Y(x) pw(y|x), microaveraged on the development data. This lets us compare convergence across different objectives, and is justified by the standard practice of performing online-tobatch conversion by early stopping on a development set (Littlestone, 1989), or by tolerant training to avoid overfitting (Solodov, 1998). 
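The early-stopping protocol just described can be summarized in a few lines; evaluate_dev_loss stands in for the micro-averaged task loss under MAP prediction on the development set, and step_fn for one update of any of Algorithms 1-3 (both are placeholders, not code from the paper).

```python
# Sketch of early stopping on development-set task loss: after each block of
# rounds, evaluate the current model and remember the best-performing iterate.
def train_with_early_stopping(step_fn, w0, n_rounds, eval_every, evaluate_dev_loss):
    w, best_w, best_loss, best_round = w0, w0, float("inf"), 0
    for t in range(1, n_rounds + 1):
        w = step_fn(w, t)                      # one bandit update (Alg. 1, 2 or 3)
        if t % eval_every == 0:
            loss = evaluate_dev_loss(w)        # dev loss under MAP prediction
            if loss < best_loss:
                best_loss, best_w, best_round = loss, w, t
    return best_w, best_round                  # best_round ~ iterations to converge

# trivial usage with dummy placeholders
best_w, best_round = train_with_early_stopping(
    step_fn=lambda w, t: w, w0=0.0, n_rounds=100, eval_every=10,
    evaluate_dev_loss=lambda w: 1.0)
print(best_round)
```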
As a further measure for comparability of convergence 1614 task Algorithm 1 Algorithm 2 Algorithm 3 Text classification γt = 1.0 γt = 10−0.75 γt = 10−1 CRF OCR T0 = 0.4, γt = 10−3.5 T0 = 0.1, γt = 10−4 λ = 10−5, k = 10−2, γt = 10−6 Chunking γt = 10−4 γt = 10−4 λ = 10−6, k = 10−2, γt = 10−6 SMT News (n-best, dense) γt = 10−5 γt = 10−4.75 λ = 10−4, µ = 0.99, γt = 10−6/ √ t News (h-graph, sparse) γt = 10−5 γt = 10−4 λ = 10−6, k = 5 · 10−3, γt = 10−6 Table 1: Metaparameter settings determined on dev sets for constant learning rate γt, temperature coefficient T0 for annealing under the schedule T = T0/ 3√epoch + 1 (Rose, 1998; Arun et al., 2010), momentum coefficient min{1 −1/(t/2 + 2), µ} (Polyak, 1964; Sutskever et al., 2013), clipping constant k used to replace pwt(˜yt|xt) with max{pwt(˜yt|xt), k} in line 7 of Algorithm 3 (Ionides, 2008), ℓ2 regularization constant λ. Unspecified parameters are set to zero. speeds across algorithms, we employ small constant learning rates in all experiments. The use of constant learning rates for Algorithms 1 and 2 is justified by the analysis of Ghadimi and Lan (2012). For Algorithm 3, the use of constant learning rates effectively compares convergence speed towards an area in close vicinity of a local minimum in the search phase of the algorithm (Bottou, 2004). The development data are also used for metaparameter search. Optimal configurations are listed in Table 1. Final testing was done by computing ∆on a further unseen test set using the model found by online-to-batch conversion. For bandit-type algorithms, final results are averaged over 3 runs with different random seeds. For statistical significance testing of results against baselines we use Approximate Randomization testing (Noreen, 1989). Multiclass classification. Multiclass text classification on the Reuters RCV1 dataset (Lewis et al., 2004) is a standard benchmark for (simplified) structured prediction that has been used in a bandit setup by Kakade et al. (2008). The simplified problem uses a binary ∆function indicating incorrect assignment of one out of 4 classes. Following Kakade et al. (2008), we used documents with exactly one label from the set of labels {CCAT, ECAT, GCAT, MCAT} and converted them to tfidf word vectors of dimension 244,805 in training. The data were split into the sets train (509,381 documents from original test pt[0-2].dat files), dev (19,486 docs: every 8th entry from test pt3.dat and test (19,806 docs from train.dat). As shown in Table 2 (row 1), all loss results are small and comparable since the task is relatively easy. For comparison, the partial information classification algorithm Banditron (Kakade et al., 2008) (after adjusting the exploration/exploitation constant on the dev set) scored 0.047 on the test set. However, our main interest is in convergence speed. Table 3 (row 1) shows that pairwise ranking (Algorithm 2) yields fastest convergence by a factor of 2-4 compared to the other bandit algorithms. Table 1 confirms that this improvement is not attributable to larger learning rates (Algorithm 2 employs a similar or smaller learning rate than Algorithms 1 and 3, respectively.) Sequence labeling for OCR and chunking. Handwritten optical character recognition (OCR) is a standard benchmark task for structured prediction (Taskar et al., 2003), where the Hamming distance between the predicted word and the gold standard labeling (normalized by word length) is assumed as the ∆function. 
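For concreteness, the schedules named in the caption of Table 1 can be written as small helpers; the formulas below follow one reading of that caption (cube-root temperature annealing, capped momentum coefficient, and probability clipping for Algorithm 3) and are not taken from the authors' code.

```python
# Assumed interpretations of the Table 1 metaparameter schedules.
def temperature(T0, epoch):
    """Annealing schedule T = T0 / (epoch + 1)^(1/3)."""
    return T0 / (epoch + 1) ** (1.0 / 3.0)

def momentum_coeff(t, mu):
    """Momentum coefficient min{1 - 1/(t/2 + 2), mu}."""
    return min(1.0 - 1.0 / (t / 2.0 + 2.0), mu)

def clipped_prob(p, k):
    """Clipping max{p_w(y|x), k} used in line 7 of Algorithm 3."""
    return max(p, k)

print(temperature(0.4, 0), temperature(0.4, 7))        # 0.4 -> 0.2
print(momentum_coeff(0, 0.99), momentum_coeff(10000, 0.99))
print(clipped_prob(1e-9, 1e-2))
```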
We used their dataset of 6,876 handwritten words, from 150 human subjects, under a split where 5,546 examples (folds 2-9) were used as train set, 704 examples (fold 1) as dev, and 626 (fold 0) as test set. We assumed the classical linear-chain Conditional Random Field (CRF) (Lafferty et al., 2001) model with input images xi at every ith node, tabular state-transition probabilities between 28 possible labels of the (i −1)th and ith node (Latin letters plus two auxiliary start and stop states).3 To test the CRF-based model also with sparse features, we followed Sha and Pereira (2003) in applying CRFs to the noun phrase chunking task 3The feature set is composed of a 16 × 8 binary pixel representation for each character, yielding 28×16×8+282 = 4, 368 features for the training set. We based our code on the pystruct kit (M¨uller and Behnke, 2014). 1615 task gain/loss full information partial information Alg. 1 Alg. 2 Alg. 3 Text classification 0/1 ↓percep., λ = 10−6 0.040 0.0306±0.0004 0.083±0.002 0.035±0.001 CRF OCR (dense) Hamming ↓ likelihood 0.099 0.261±0.003 0.332±0.011 0.257±0.004 Chunking (sparse) F1-score ↑ likelihood 0.935 0.923±0.002 0.914±0.002 0.891±0.005 out-of-domain in-domain Alg. 1 Alg. 2 Alg. 3 SMT News (n-best list, dense) BLEU ↑ 0.2588 0.2841 0.2689±0.0003 0.2745±0.0004 0.2763±0.0005 News (hypergraph, sparse) 0.2651 0.2831 0.2667±0.00008 0.2733±0.0005 0.2713±0.001 Table 2: Test set evaluation for full information lower and upper bounds and partial information bandit learners (expected loss, pairwise loss, cross-entropy). ↑and ↓indicate the direction of improvement for the respective evaluation metric. on the CoNLL-2000 dataset4. We split the original training set into a dev set (top 1,000 sent.) and used the rest as train set (7,936 sent.); the test set was kept intact (2,012 sent.). For an input sentence x, each CRF node xi carries an observable word and its part-of-speech tag, and has to be assigned a chunk tag ci out of 3 labels: Beginning, Inside, or Outside (of a noun phrase). Chunk labels are not nested. As in Sha and Pereira (2003), we use second order Markov dependencies (bigram chunk tags), such that for sentence position i, the state is yi = ci−1ci, increasing the label set size from 3 to 9. Out of the full list of Sha and Pereira (2003)’s features we implemented all except two feature templates, yi = y and c(yi) = c, to simplify implementation. Impossible bigrams (OI) and label transitions of the pattern ⋆O →I⋆were prohibited by setting the respective potentials to −∞. As the active feature count in the train set was just under 2M, we hashed all features and weights into a sparse array of 2M entries. Despite the reduced train size and feature set, and hashing, our full information baseline trained with log-likelihood attained the test F1-score of 0.935, which is comparable to the original result of 0.9438. Table 2 (rows 2-3) and Table 3 (rows 2-3) show evaluation and convergence results for the OCR and chunking tasks. For the chunking task, the F1score results obtained for bandit learning are close to the full-information baseline. For the OCR task, bandit learning does decrease Hamming loss, but it does not quite achieve full-information performance. However, pairwise ranking (Algorithm 2) again converges faster than the alternative bandit algorithms by a factor of 2-4, despite similar learning rates for Algorithms 1 and 2 and a compensa4http://www.cnts.ua.ac.be/conll2000/ chunking/ task Alg. 1 Alg. 2 Alg. 
3 Text classification 2.0M 0.5M 1.1M CRF OCR 14.4M 9.3M 37.9M Chunking 7.5M 4.7M 5.9M SMT News (n-best, dense) 3.8M 1.2M 1.2M News (h-graph, sparse) 370k 115k 281k Table 3: Number of iterations required to meet stopping criterion on development data. tion of smaller learning rates in Algorithm 3 by variance reduction and regularization. Discriminative ranking for SMT. Following Sokolov et al. (2015), we apply bandit learning to simulate personalized MT where a given SMT system is adapted to user style and domain based on feedback to predicted translations. We perform French-to-English domain adaptation from Europarl to NewsCommentary domains using the data of Koehn and Schroeder (2007). One difference of our experiment compared to Sokolov et al. (2015) is our use of the SCFG decoder cdec (Dyer et al., 2010) (instead of the phrase-based Moses decoder). Furthermore, in addition to bandit learning for re-ranking on unique 5,000-best lists, we perform ranking on hypergraphs with redecoding after each update. Sampling and computation of expectations on the hypergraph uses the Inside-Outside algorithm over the expectation semiring (Li and Eisner, 2009). The re-ranking model used 15 dense features (6 lexicalized reordering features, two (out-of- and in-domain) language models, 5 translation model features, distortion and word penalty). The hypergraph experiments used additionally lexicalized sparse features: rule-id features, rule source and target bigram features, and rule shape features. 1616 0.25 0.255 0.26 0.265 0.27 0 100000 200000 300000 400000 500000 600000 BLEU on dev #training samples 1) expected-loss 2) pairwise-ranking 3) cross-entropy Figure 1: Learning curves for task loss BLEU on development data for SMT hypergraph re-decoding models, together with averages over three runs of the respective algorithms. For all SMT experiments we tokenized, lowercased and aligned words using cdec tools, trained 4-gram in-domain and out-of-domain language models (on the English sides of Europarl and in-domain NewsCommentary) For dense feature models, the out-of-domain baseline SMT model was trained on 1.6M parallel Europarl data and tuned with cdec’s lattice MERT (Och, 2003) on out-of-domain Europarl dev2006 dev set (2,000 sent.). The full-information in-domain SMT model tuned by MERT on news in-domain sets (nc-dev2007, 1,057 sent.) gives the range of possible improvements by the difference of its BLEU score to the one of the out-of-domain model (2.5 BLEU points). For sparse feature models, in-domain and out-of-domain baselines were trained on the same data using MIRA (Chiang, 2012). The in-domain MIRA model contains 133,531 active features, the out-of-domain MIRA model 214,642. MERT and MIRA runs for both settings were repeated 7 times and median results are reported. Learning under bandit feedback starts at the learned weights of the out-of-domain median models. It uses the parallel in-domain data (news-commentary, 40,444 sent.) to simulate bandit feedback, by evaluating the sampled translation against the reference using as loss function ∆a smoothed per-sentence 1 −BLEU (zero n-gram counts being replaced with 0.01). For pairwise preference learning we use binary feedback resulting from the comparison of the BLEU scores of the sampled translations. To speed up training for hypergraph re-decoding, the training instances were reduced to those with at most 60 words (38,350 sent.). 
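The per-sentence feedback loss used in the SMT simulation can be sketched as follows; this is one plausible reading of "smoothed per-sentence 1 − BLEU with zero n-gram counts replaced by 0.01", not the exact implementation used in the paper.

```python
# Sketch of the smoothed sentence-level loss Delta = 1 - BLEU used to simulate
# translation feedback; zero n-gram match counts are replaced with 0.01.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu_loss(hyp, ref, max_n=4, eps=0.01):
    hyp, ref = hyp.split(), ref.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())   # clipped n-gram matches
        total = max(sum(h.values()), 1)
        log_prec += math.log(max(match, eps) / total)
    bp = min(1.0, math.exp(1.0 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return 1.0 - bp * math.exp(log_prec / max_n)

print(sentence_bleu_loss("the cat sat on the mat", "the cat is on the mat"))
```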
Training is distributed across 38 shards using multitask-based feature selection for sparse models (Simianer et al., 2012), where after each epoch of distributed training, the top 10k features across all shards are selected, all other features are set to zero. The meta-parameters were adjusted on the in-domain dev sets (nc-devtest2007, 1,064 parallel sentences). The final results are obtained on separate in-domain test sets (nc-test2007, 2,007 sentences) by averaging three independent runs for the optimal dev set meta-parameters. The results for n-best re-ranking in Table 2 (4th row) show statistically significant improvements of 1-2 BLEU points over the out-of-domain SMT model (that includes an in-domain language model) for all bandit learning methods, confirming the results of Sokolov et al. (2015) for a different decoder. Similarly, the results for hypergraph re-coding with sparse feature models (row 5 in Table 2) show significant improvements over the out-of-domain baseline for all bandit learners. Table 3 (row 4) shows the convergence speed for nbest re-ranking, which is similar for Algorithms 2 and 3, and improved over Algorithm 1 by a factor of 3. For hypergraph re-decoding, Table 3 (row 5) shows fastest convergence for Algorithm 2 com1617 pared to Algorithms 1 and 3 by a factor of 2-4.5 Again, we note that for both n-best re-ranking and hypergraph re-decoding, learning rates are similar for Algorithms 1 and 2, and smaller learning rates in Algorithm 3 are compensated by variance reduction or regularization. Figure 1 shows the learning curves of BLEU for SMT hypergraph re-decoding on the development set that were used to find the stopping points. For each algorithm, we show learning curves for three runs with different random seeds, together with an average learning curve. We see that Algorithm 2, optimizing the pairwise preference ranking objective, reaches the stopping point of peak performance on development data fastest, followed by Algorithms 1 and 3. Furthermore, the larger variance of the runs of Algorithm 3 is visible, despite the smallest learning rate used. 5 Conclusion We presented objectives and algorithms for structured prediction from bandit feedback, with a focus on improving convergence speed and ease of elicitability of feedback. We investigated the performance of all algorithms by test set performance on different tasks, however, the main interest of this paper was a comparison of convergence speed across different objectives by early stopping on a convergence criterion based on heldout data performance. Our experimental results on different NLP tasks showed a consistent advantage of convergence speed under this criterion for bandit pairwise preference learning. In light of the standard stochastic approximation analysis, which predicts a convergence advantage for strongly convex objectives over convex or non-convex objectives, this result is surprising. However, the result can be explained by considering important empirical factors such as the variance of stochastic updates. Our experimental results support the numerical results of smallest stochastic variance and fastest convergence in gradient norm (Sokolov et al., 2016) by consistent fastest empirical convergence for bandit pairwise preference learning under the criterion of early stopping on heldout data performance. 
Given the advantages of faster convergence and the fact that only relative feedback in terms of comparative evaluations is required, bandit pair5The faster convergence speed hypergraph re-decoding compared to n-best re-ranking is due to the distributed feature selection and thus orthogonal to the comparison of objective functions that is of interest here. wise preference learning is a promising framework for future real-world interactive learning. Acknowledgments This research was supported in part by the German research foundation (DFG), and in part by a research cooperation grant with the Amazon Development Center Germany. References Alekh Agarwal, Ofer Dekel, and Liu Xiao. 2010. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, Haifa, Israel. Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. 2014. Taming the monster: A fast and simple algorithm for contextual bandits. In ICML, Beijing, China. Abhishek Arun, Barry Haddow, and Philipp Koehn. 2010. A unified approach to minimum risk training and decoding. In Workshop on SMT and Metrics (MATR), Uppsala, Sweden. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. 2002a. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256. Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. 2002b. The nonstochastic multiarmed bandit problem. SIAM J. on Computing, 32(1):48–77. Nicola Bertoldi, Patrick Simianer, Mauro Cettolo, Katharina W¨aschle, Marcello Federico, and Stefan Riezler. 2014. Online adaptation to post-edits for phrase-based statistical machine translation. Machine Translation, 29:309–339. Dimitri P. Bertsekas and John N. Tsitsiklis. 1996. Neuro-Dynamic Programming. Athena Scientific. L´eon Bottou. 2004. Stochastic learning. In Olivier Bousquet, Ulrike von Luxburg, and Gunnar R¨atsch, editors, Advanced Lectures on Machine Learning, pages 146–168. Springer, Berlin. S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In ACL, Suntec, Singapore. S´ebastian Bubeck and Nicol`o Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multiarmed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122. Nicol`o Cesa-Bianchi and G´abor Lugosi. 2012. Combinatorial bandits. J. of Computer and System Sciences, 78:1401–1422. 1618 Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, and John Langford. 2015. Learning to search better than your teacher. In ICML, Lille, France. Olivier Chapelle, Eren Masnavoglu, and Romer Rosales. 2014. Simple and scalable response prediction for display advertising. ACM Trans. on Intelligent Systems and Technology, 5(4). David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. JMLR, 12:1159–1187. Corinna Cortes, Mehryar Mohri, and Asish Rastogi. 2007. Magnitude-preserving ranking algorithms. In ICML, Corvallis, OR. Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. 2007. The price of bandit information for online optimization. In NIPS, Vancouver, Canada. Michael Denkowski, Chris Dyer, and Alon Lavie. 2014. Learning from post-editing: Online model adaptation for statistical machine translation. In EACL, Gothenburg, Sweden. John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. 2015. Optimal rates for zero-order convex optimization: The power of two function evaluations. 
IEEE Translactions on Information Theory, 61(5):2788–2806. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In ACL Demo, Uppsala, Sweden. Yoav Freund, Ray Iyer, Robert E. Schapire, and Yoram Singer. 2003. An efficient boosting algorithm for combining preferences. JMLR, 4:933–969. Johannes F¨urnkranz and Eyke H¨ullermeier. 2010. Preference learning and ranking by pairwise comparison. In Johannes F¨urnkranz and Eyke H¨ullermeier, editors, Preference Learning. Springer. Saeed Ghadimi and Guanghui Lan. 2012. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. on Optimization, 4(23):2342–2368. Kevin Gimpel and Noah A. Smith. 2010. Softmaxmargin training for structured log-linear models. Technical Report CMU-LTI-10-008, Carnegie Mellon University, Pittsburgh, PA. Spence Green, Sida I. Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D. Manning. 2014. Human effort and machine learnability in computer aided translation. In EMNLP, Doha, Qatar. Xiaodong He and Li Deng. 2012. Maximum expected BLEU training of phrase and lexicon translation models. In ACL, Jeju Island, Korea. Ralf Herbrich, Thore Graepel, and Klaus Obermayer. 2000. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132. Cambridge, MA. Edward L. Ionides. 2008. Truncated importance sampling. J. of Comp. and Graph. Stat., 17(2):295–311. Kevin G. Jamieson, Robert D. Nowak, and Benjamin Recht. 2012. Query complexity of derivative-free optimization. In NIPS, Lake Tahoe, CA. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In KDD, New York, NY. Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. 2008. Efficient bandit algorithms for online multiclass prediction. In ICML, Helsinki, Finland. Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In WMT, Prague, Czech Republic. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, San Francisco, CA. John Langford and Tong Zhang. 2007. The epochgreedy algorithm for contextual multi-armed bandits. In NIPS, Vancouver, Canada. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A new benchmark collection for text categorization research. JMLR, 5:361–397. Zhifei Li and Jason Eisner. 2009. First-and secondorder expectation semirings with applications to minimum-risk training on translation forests. In EMNLP, Edinburgh, UK. Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In WWW, Raleigh, NC. Nick Littlestone. 1989. From on-line to batch learning. In COLT, Santa Cruz, CA. Andreas C. M¨uller and Sven Behnke. 2014. pystruct - learning structured prediction in python. JMLR, 15:2055–2060. Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses. An Introduction. Wiley, New York. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In HLT-NAACL, Edmonton, Canada. Boris T. Polyak and Yakov Z. Tsypkin. 1973. Pseudogradient adaptation and training algorithms. Automation and remote control, 34(3):377–397. Boris T. Polyak. 1964. 
Some methods of speeding up the convergence of iteration methods. USSR Comp. Math. and Math. Phys., 4(5):1–17. 1619 Boris T. Polyak. 1987. Introduction to Optimization. Optimization Software, Inc., New York. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR, San Juan, Puerto Rico. Kenneth Rose. 1998. Deterministic annealing for clustering, compression, classification, regression and related optimization problems. IEEE, 86(11). Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In NAACL, Edmonton, Canada. Shai Shalev-Shwartz. 2012. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194. Patrick Simianer, Stefan Riezler, and Chris Dyer. 2012. Joint feature selection in distributed stochastic learning for large-scale discriminative training in SMT. In ACL, Jeju Island, Korea. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In COLING-ACL, Sydney, Australia. Artem Sokolov, Stefan Riezler, and Tanguy Urvoy. 2015. Bandit structured prediction for learning from user feedback in statistical machine translation. In MT Summit XV, Miami, FL. Artem Sokolov, Julia Kreutzer, and Stefan Riezler. 2016. Stochastic structured prediction under bandit feedback. CoRR, abs/1606.00739. Mikhail V. Solodov. 1998. Incremental gradient algorithms with stepsizes bounded away from zero. Computational Optimization and Applications, 11:23–35. Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. 2013. On the importance of initialization and momentum in deep learning. In ICML, Atlanta, GA. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS, Vancouver, Canada. Csaba Szepesv´ari. 2009. Algorithms for Reinforcement Learning. Morgan & Claypool. Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin markov networks. In NIPS, Vancouver, Canada. Louis Leon Thurstone. 1927. A law of comparative judgement. Psychological Review, 34:278–286. Yisong Yue and Thorsten Joachims. 2009. Interactively optimizing information retrieval systems as a dueling bandits problem. In ICML, Montreal, Canada. Alan Yuille and Xuming He. 2012. Probabilistic models of vision and max-margin methods. Frontiers of Electrical and Electronic Engineering, 7(1):94–106. 1620
2016
152
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1621–1630, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Deep Reinforcement Learning with a Natural Language Action Space Ji He∗, Jianshu Chen†, Xiaodong He†, Jianfeng Gao†, Lihong Li† Li Deng† and Mari Ostendorf∗ ∗Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA {jvking, ostendor}@uw.edu †Microsoft Research, Redmond, WA 98052, USA {jianshuc, xiaohe, jfgao, lihongli, deng}@microsoft.com Abstract This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text. 1 Introduction This work is concerned with learning strategies for sequential decision-making tasks, where a system takes actions at a particular state with the goal of maximizing a long-term reward. More specifically, we consider tasks where both the states and the actions are characterized by natural language, such as in human-computer dialog systems, tutoring systems, or text-based games. In a text-based game, for example, the player (or system, in this case) is given a text string that describes the current state of the game and several text strings that describe possible actions one could take. After selecting one of the actions, the environment state is updated and revealed in a new textual description. A reward is given either at each transition or in the end. The objective is to understand, at each step, the state text and all the action texts to pick the most relevant action, navigating through the sequence of texts so as to obtain the highest longterm reward. Here the notion of relevance is based on the joint state/action impact on the reward: an action text string is said to be “more relevant” (to a state text string) than the other action texts if taking that action would lead to a higher longterm reward. Because a player’s action changes the environment, reinforcement learning (Sutton and Barto, 1998) is appropriate for modeling longterm dependency in text games. There is a large body of work on reinforcement learning. Of most interest here are approaches leveraging neural networks because of their success in handling a large state space. Early work — TD-gammon — used a neural network to approximate the state value function (Tesauro, 1995). Recently, inspired by advances in deep learning (LeCun et al., 2015; Hinton et al., 2012; Krizhevsky et al., 2012; Dahl et al., 2012), significant progress has been made by combining deep learning with reinforcement learning. Building on the approach of Q-learning (Watkins and Dayan, 1992), the “Deep Q-Network” (DQN) was developed and applied to Atari games (Mnih et al., 2013; Mnih et al., 2015) and shown to achieve human level performance by applying convolutional neural networks to the raw image pixels. Narasimhan et al. 
(2015) applied a Long Short-Term Memory network to characterize the state space in a DQN framework for learning control policies for parserbased text games. More recently, Nogueira and Cho (2016) have also proposed a goal-driven web navigation task for language based sequential decision making study. Another stream of work focuses on continuous control with deep reinforcement learning (Lillicrap et al., 2016), where an actor-critic algorithm operates over a known continuous action space. Inspired by these successes and recent work using neural networks to learn phrase- or sentence1621 level embeddings (Collobert and Weston, 2008; Huang et al., 2013; Le and Mikolov, 2014; Sutskever et al., 2014; Kiros et al., 2015), we propose a novel deep architecture for text understanding, which we call a deep reinforcement relevance network (DRRN). The DRRN uses separate deep neural networks to map state and action text strings into embedding vectors, from which “relevance” is measured numerically by a general interaction function, such as their inner product. The output of this interaction function defines the value of the Q-function for the current state-action pair, which characterizes the optimal long-term reward for pairing these two text strings. The Q-function approximation is learned in an end-to-end manner by Q-learning. The DRRN differs from prior work in that earlier studies mostly considered action spaces that are bounded and known. For actions described by natural language text strings, the action space is inherently discrete and potentially unbounded due to the exponential complexity of language with respect to sentence length. A distinguishing aspect of the DRRN architecture — compared to simple DQN extensions — is that two different types of meaning representations are learned, reflecting the tendency for state texts to describe scenes and action texts to describe potential actions from the user. We show that the DRRN learns a continuous space representation of actions that successfully generalize to paraphrased descriptions of actions unseen in training. 2 Deep Reinforcement Relevance Network 2.1 Text Games and Q-learning We consider the sequential decision making problem for text understanding. At each time step t, the agent will receive a string of text that describes the state st (i.e., “state-text”) and several strings of text that describe all the potential actions at (i.e., “action-text”). The agent attempts to understand the texts from both the state side and the action side, measuring their relevance to the current context st for the purpose of maximizing the long-term reward, and then picking the best action. Then, the environment state is updated st+1 = s′ according to the probability p(s′|s, a), and the agent receives a reward rt for that particular transition. The policy of the agent is defined to be the probability π(at|st) of taking action at at state st. Define the Q-function Qπ(s, a) as the expected return starting from s, taking the action a, and thereafter following policy π(a|s) to be: Qπ(s, a) = E (+∞ X k=0 γkrt+k st = s, at = a ) where γ denotes a discount factor. The optimal policy and Q-function can be found by using the Q-learning algorithm (Watkins and Dayan, 1992): Q(st, at) ←Q(st, at)+ (1) ηt · rt + γ · max a Q(st+1, a) −Q(st, at)  where ηt is the learning rate of the algorithm. 
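For readers less familiar with Q-learning, the tabular form of update (1) looks as follows; the states, actions and reward are toy placeholders, and the text-game setting replaces the table with the DRRN approximation described below.

```python
# Generic tabular Q-learning update (Eq. 1) for a toy problem with hashable
# states and actions.
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)], implicitly zero-initialized
gamma, eta = 0.9, 0.1

def q_update(s, a, r, s_next, actions_next):
    target = r + gamma * max((Q[(s_next, a2)] for a2 in actions_next), default=0.0)
    Q[(s, a)] += eta * (target - Q[(s, a)])

# toy transition: from state "s0", taking "look up" yields reward 1 and state "s1"
q_update("s0", "look up", 1.0, "s1", ["run", "hide"])
print(Q[("s0", "look up")])
```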
In this paper, we use a softmax selection strategy as the exploration policy during the learning stage, which chooses the action at at state st according to the following probability: π(at = ai t|st) = exp(α · Q(st, ai t)) P|At| j=1 exp(α · Q(st, aj t)) , (2) where At is the set of feasible actions at state st, ai t is the i-th feasible action in At, | · | denotes the cardinality of the set, and α is the scaling factor in the softmax operation. α is kept constant throughout the learning period. All methods are initialized with small random weights, so initial Q-value differences will be small, thus making the Q-learning algorithm more explorative initially. As Q-values better approximate the true values, a reasonable α will make action selection put high probability on the optimal action (exploitation), but still maintain a small exploration probability. 2.2 Natural language action space Let S denote the state space, and let A denote the entire action space that includes all the unique actions over time. A vanilla Q-learning recursion (1) needs to maintain a table of size |S| × |A|, which is problematic for a large state/action space. Prior work using a DNN in Q-function approximation has shown high capacity and scalability for handling a large state space, but most studies have used a network that generates |A| outputs, each of which represents the value of Q(s, a) for a particular action a. It is not practical to have a DQN architecture of a size that is explicitly dependence on the large number of natural language actions. Further, in many text games, the feasible action set At at each time t is an unknown subset of the unbounded action space A that varies over time. 1622 For the case where the maximum number of possible actions at any point in time (maxt |At|) is known, the DQN can be modified to simply use that number of outputs (“Max-action DQN”), as illustrated in Figure 1(a), where the state and action vectors are concatenated (i.e., as an extended state vector) as its input. The network computes the Q-function values for the actions in the current feasible set as its outputs. For a complex game, maxt |At| may be difficult to obtain, because At is usually unknown beforehand. Nevertheless, we will use this modified DQN as a baseline. An alternative approach is to use a function approximation using a neural network that takes a state-action pair as input, and outputs a single Qvalue for each possible action (“Per-action DQN” in Figure 1(b)). This architecture easily handles a varying number of actions and represents a second baseline. We propose an alternative architecture for handling a natural language action space in sequential text understanding: the deep reinforcement relevance network (DRRN). As shown in Figure 1(c), the DRRN consists of a pair of DNNs, one for the state text embedding and the other for action text embeddings, which are combined using a pairwise interaction function. The texts used to describe states and actions could be very different in nature, e.g., a state text could be long, containing sentences with complex linguistic structure, whereas an action text could be very concise or just a verb phrase. Therefore, it is desirable to use two networks with different structures to handle state/action texts, respectively. As we will see in the experimental sections, by using two separate deep neural networks for state and action sides, we obtain much better results. 
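The softmax selection rule in Eq. (2) amounts to the following few lines; the Q-values and the scaling factor alpha below are arbitrary example values.

```python
# Sketch of softmax exploration (Eq. 2): actions are sampled in proportion to
# exp(alpha * Q), with alpha kept constant throughout learning.
import numpy as np

def softmax_select(q_values, alpha, rng):
    z = alpha * np.asarray(q_values, dtype=float)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    p /= p.sum()
    return rng.choice(len(p), p=p), p

rng = np.random.default_rng(0)
idx, probs = softmax_select([1.2, 0.3, -0.5], alpha=0.2, rng=rng)
print(idx, probs)
```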
2.3 DRRN architecture: Forward activation Given any state/action text pair (st, ai t), the DRRN estimates the Q-function Q(st, ai t) in two steps. First, map both st and ai t to their embedding vectors using the corresponding DNNs, respectively. Second, approximate Q(st, ai t) using an interaction function such as the inner product of the embedding vectors. Then, given a particular state st, we can select the optimal action at among the set of actions via at = arg maxai t Q(st, ai t). More formally, let hl,s and hl,a denote the l-th hidden layer for state and action side neural networks, respectively. For the state side, Wl,s and bl,s denote the linear transformation weight matrix and bias vector between the (l −1)-th and l-th hidden layers. Wl,a and bl,a denote the equivalent parameters for the action side. In this study, the DRRN has L hidden layers on each side. h1,s = f(W1,sst + b1,s) (3) hi 1,a = f(W1,aai t + b1,a) (4) hl,s = f(Wl−1,shl−1,s + bl−1,s) (5) hi l,a = f(Wl−1,ahi l−1,a + bl−1,a) (6) where f(·) is the nonlinear activation function at the hidden layers, which, for example, could be chosen as tanh (x), and i = 1, 2, 3, ..., |At| is the action index. A general interaction function g(·) is used to approximate the Q-function values, Q(s, a), in the following parametric form: Q(s, ai; Θ) = g hL,s, hi L,a  (7) where Θ denotes all the model parameters. The interaction function could be an inner product, a bilinear operation, or a nonlinear function such as a deep neural network. In our experiments, the inner product and bilinear operation gave similar results. For simplicity, we present our experiments mostly using the inner product interaction function. The success of the DRRN in handling a natural language action space A lies in the fact that the state-text and the action-texts are mapped into separate finite-dimensional embedding spaces. The end-to-end learning process (discussed next) makes the embedding vectors in the two spaces more aligned for “good” (or relevant) action texts compared to “bad” (or irrelevant) choices, resulting in a higher interaction function output (Qfunction value). 2.4 Learning the DRRN: Back propagation To learn the DRRN, we use the “experiencereplay” strategy (Lin, 1993), which uses a fixed exploration policy to interact with the environment to obtain a sample trajectory. Then, we randomly sample a transition tuple (sk, ak, rk, sk+1), compute the temporal difference error for sample k: dk = rk+γ max a Q(sk+1, a; Θk−1)−Q(sk, ak; Θk−1), and update the model according to the recursions: Wv,k = Wv,k−1 + ηkdk · ∂Q(sk, ak; Θk−1) ∂Wv (8) bv,k = bv,k−1 + ηkdk · ∂Q(sk, ak; Θk−1) ∂bv (9) 1623 𝑠௧ 𝑎௧ ଵ ℎଵ ℎଶ 𝑄௧(𝑠, 𝑎ଵ) 𝑎௧ ଶ 𝑄௧(𝑠, 𝑎ଶ) (a) Max-action DQN 𝑠௧ 𝑎௧ ௜ ℎଵ ℎଶ 𝑠௧ 𝑎௧ ଵ ℎଵ ℎଶ 𝑄௧(𝑠, 𝑎ଵ) 𝑎௧ ଶ 𝑄௧(𝑠, 𝑎ଶ) 𝑄௧(𝑠, 𝑎௜) (b) Per-action DQN 𝑠௧ pairwise interaction function (e.g. inner product) 𝑎௧ ௜ ℎଵ,௦ ℎଶ,௦ 𝑄௧(𝑠, 𝑎௜) ℎଶ,௔ ௜ ℎଵ,௔ ௜ (c) DRRN Figure 1: Different deep Q-learning architectures: Max-action DQN and Per-action DQN both treat input text as concantenated vectors and compute output Q-values with a single NN. DRRN models text embeddings from state/action sides separately, and use an interaction function to compute Q-values. Figure 2: PCA projections of text embedding vectors for state and associated action vectors after 200, 400 and 600 training episodes. 
The state is “As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street.” Action 1 (good choice) is “Look up”, and action 2 (poor choice) is “Ignore the alarm of others and continue moving forward.” for v ∈{s, a}. Expressions for ∂Q ∂Wv , ∂Q ∂bv and other algorithm details are given in supplementary materials. Random sampling essentially scrambles the trajectory from experience-replay into a “bag-of-transitions”, which has been shown to avoid oscillations or divergence and achieve faster convergence in Q-learning (Mnih et al., 2015). Since the models on the action side share the same parameters, models associated with all actions are effectively updated even though the back propagation is only over one action. We apply back propagation to learn how to pair the text strings from the reward signals in an end-to-end manner. The representation vectors for the state-text and the action-text are automatically learned to be aligned with each other in the text embedding space from the reward signals. A summary of the full learning algorithm is given in Algorithm 1. Figure 2 illustrates learning with an inner product interaction function. We used Principal Component Analysis (PCA) to project the 100dimension last hidden layer representation (before the inner product) to a 2-D plane. The vector embeddings start with small values, and after 600 episodes of experience-replay training, the embeddings are very close to the converged embedding (4000 episodes). The embedding vector of the optimal action (Action 1) converges to a positive inner product with the state embedding vector, while Action 2 converges to a negative inner product. 3 Experimental Results 3.1 Text games Text games, although simple compared to video games, still enjoy high popularity in online communities, with annual competitions held online 1624 Algorithm 1 Learning algorithm for DRRN 1: Initialize replay memory D to capacity N. 2: Initialize DRRN with small random weights. 3: Initialize game simulator and load dictionary. 4: for episode = 1, . . . , M do 5: Restart game simulator. 6: Read raw state text and a list of action text from the simulator, and convert them to representation s1 and a1 1, a2 1, . . . , a|A1| 1 . 7: for t = 1, . . . , T do 8: Compute Q(st, ai t; Θ) for the list of actions using DRRN forward activation (Section 2.3). 9: Select an action at based on probability distribution π(at = ai t|st) (Equation 2) 10: Execute action at in simulator 11: Observe reward rt. Read the next state text and the next list of action texts, and convert them to representation st+1 and a1 t+1, a2 t+1, . . . , a|At+1| t+1 . 12: Store transition (st, at, rt, st+1, At+1) in D. 13: Sample random mini batch of transitions (sk, ak, rk, sk+1, Ak+1) from D. 14: Set yk = ( rk if sk+1 is terminal rk + γ maxa′∈Ak+1 Q(sk+1, a′; Θ)) otherwise 15: Perform a gradient descent step on (yk −Q(sk, ak; Θ))2 with respect to the network parameters Θ (Section 2.4). Back-propagation is performed only for ak even though there are |Ak| actions at time k. 16: end for 17: end for since 1995. Text games communicate to players in the form of a text display, which players have to understand and respond to by typing or clicking text (Adams, 2014). There are three types of text games: parser-based (Figure 3(a)), choicebased (Figure 3(b)), and hypertext-based (Figure 3(c)). Parser-based games accept typed-in commands from the player, usually in the form of verb phrases, such as “eat apple”, “get key”, or “go east”. 
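Before the game descriptions continue, here is a minimal numpy sketch of the DRRN scoring step used in line 8 of Algorithm 1 above (the forward activation of Eqs. (3)-(7) with L = 2, tanh units and an inner-product interaction); the layer sizes, weights and bag-of-words inputs are random placeholders rather than trained parameters.

```python
# Minimal sketch of the DRRN forward pass: separate state and action towers
# whose final hidden layers are combined by an inner product to give Q(s, a_i).
import numpy as np

rng = np.random.default_rng(0)
d_state, d_action, h = 50, 30, 20     # hypothetical input/hidden sizes

Ws = [rng.normal(0, 0.1, (h, d_state)), rng.normal(0, 0.1, (h, h))]
bs = [np.zeros(h), np.zeros(h)]
Wa = [rng.normal(0, 0.1, (h, d_action)), rng.normal(0, 0.1, (h, h))]
ba = [np.zeros(h), np.zeros(h)]

def tower(x, weights, biases):
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)
    return x

def drrn_q(state_vec, action_vecs):
    hs = tower(state_vec, Ws, bs)                     # h_{L,s}
    ha = [tower(a, Wa, ba) for a in action_vecs]      # h^i_{L,a}
    return np.array([hs @ hai for hai in ha])         # Q(s, a_i) = <h_s, h^i_a>

s = rng.random(d_state)                  # bag-of-words state representation
actions = [rng.random(d_action) for _ in range(3)]
q = drrn_q(s, actions)
print(q, "best action:", int(np.argmax(q)))
```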
They involve the least complex action language. Choice-based and hypertext-based games present actions after or embedded within the state text. The player chooses an action, and the story continues based on the action taken at this particular state. With the development of web browsing and richer HTML display, choice-based and hypertext-based text games have become more popular, increasing in percentage from 8% in 2010 to 62% in 2014.1 For parser-based text games, Narasimhan et al. (2015) have defined a fixed set of 222 actions, which is the total number of possible phrases the parser accepts. Thus the parser-based text game is reduced to a problem that is well suited to a fixed1Statistics obtained from http://www.ifarchive. org Game Saving John Machine of Death Text game type Choice Choice & Hypertext Vocab size 1762 2258 Action vocab size 171 419 Avg. words/description 76.67 67.80 State transitions Deterministic Stochastic # of states (underlying) ≥70 ≥200 Table 1: Statistics for the games “Saving John” and and “Machine of Death”. action-set DQN. However, for choice-based and hypertext-based text games, the size of the action space could be exponential with the length of the action sentences, which is handled here by using a continuous representation of the action space. In this study, we evaluate the DRRN with two games: a deterministic text game task called “Saving John” and a larger-scale stochastic text game called “Machine of Death” from a public archive.2 The basic text statistics of these tasks are shown in Table 1. The maximum value of feasible actions (i.e., maxt |At|) is four in “Saving John”, and nine in “Machine of Death”. We manually annotate fi2Simulators are available at https://github.com/ jvking/text-games 1625 (a) Parser-based (b) Choiced-based (c) Hypertext-based Figure 3: Different types of text games nal rewards for all distinct endings in both games (as shown in supplementary materials). The magnitude of reward scores are given to describe sentiment polarity of good/bad endings. On the other hand, each non-terminating step we assign with a small negative reward, to encourage the learner to finish the game as soon as possible. For the text game “Machine of Death”, we restrict an episode to be no longer than 500 steps. In “Saving John” all actions are choice-based, for which the mapping from text strings to at are clear. In “Machine of Death”, when actions are hypertext, the actions are substrings of the state. In this case st is associated with the full state description, and at are given by the substrings without any surrounding context. For text input, we use raw bag-of-words as features, with different vocabularies for the state side and action side. 3.2 Experiment setup We apply DRRNs with both 1 and 2 hidden layer structures. In most experiments, we use dotproduct as the interaction function and set the hidden dimension to be the same for each hidden layer. We use DRRNs with 20, 50 and 100-dimension hidden layer(s) and build learning curves during experience-replay training. The learning rate is constant: ηt = 0.001. In testing, as in training, we apply softmax selection. We record average final rewards as performance of the model. The DRRN is compared to multiple baselines: a linear model, two max-action DQNs (MA DQN) (L = 1 or 2 hidden layers), and two per-action DQNs (PA DQN) (again, L = 1, 2). All baselines use the same Q-learning framework with different function approximators to predict Q(st, at) given the current state and actions. 
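To make the contrast among these function approximators concrete, the sketch below shows a DRRN-style scorer: the state and action texts are turned into raw bag-of-words vectors over separate vocabularies, pushed through separate small networks, and scored with an inner product. A per-action DQN would instead feed the concatenation of the two bag-of-words vectors through a single network. The toy vocabularies, layer sizes, and random weights are illustrative assumptions, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def bow(text, vocab):
    """Raw bag-of-words vector over a fixed vocabulary (one vocabulary per side)."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def embed(x, W1, W2):
    """A small two-layer network mapping a bag-of-words vector to an embedding."""
    return np.tanh(W2 @ np.tanh(W1 @ x))

# Assumed toy vocabularies and 20-dimensional hidden layers, randomly initialized.
state_vocab  = {w: i for i, w in enumerate("you look up people flee street move forward".split())}
action_vocab = {w: i for i, w in enumerate("look up ignore alarm continue moving forward".split())}
d = 20
Ws1, Ws2 = rng.normal(0, 0.1, (d, len(state_vocab))),  rng.normal(0, 0.1, (d, d))
Wa1, Wa2 = rng.normal(0, 0.1, (d, len(action_vocab))), rng.normal(0, 0.1, (d, d))

def q_drrn(state_text, action_text):
    """DRRN-style scorer: separate state/action embeddings, inner-product interaction."""
    s = embed(bow(state_text, state_vocab), Ws1, Ws2)
    a = embed(bow(action_text, action_vocab), Wa1, Wa2)
    return float(s @ a)

state = "the people around you flee the street as you move forward"
for act in ["look up", "ignore the alarm and continue moving forward"]:
    print(act, "->", round(q_drrn(state, act), 4))
```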
For the linear and MA DQN baselines, the input is the textbased state and action descriptions, each as a bag of words, with the number of outputs equal to the maximum number of actions. When there are fewer actions than the maximum, the highest scoring available action is used. The PA DQN baseline Eval metric Average reward hidden dimension 20 50 100 Linear 4.4 (0.4) PA DQN (L = 1) 2.0 (1.5) 4.0 (1.4) 4.4 (2.0) PA DQN (L = 2) 1.5 (3.0) 4.5 (2.5) 7.9 (3.0) MA DQN (L = 1) 2.9 (3.1) 4.0 (4.2) 5.9 (2.5) MA DQN (L = 2) 4.9 (3.2) 9.0 (3.2) 7.1 (3.1) DRRN (L = 1) 17.1 (0.6) 18.3 (0.2) 18.2 (0.2) DRRN (L = 2) 18.4 (0.1) 18.5 (0.3) 18.7 (0.4) Table 2: The final average rewards and standard deviations on “Saving John”. takes each pair of state-action texts as input, and generates a corresponding Q-value. We use softmax selection, which is widely applied in practice, to trade-off exploration vs. exploitation. Specifically, for each experiencereplay, we first generate 200 episodes of data (about 3K tuples in “Saving John” and 16K tuples in “Machine of Death”) using the softmax selection rule in (2), where we set α = 0.2 for the first game and α = 1.0 for the second game. The α is picked according to an estimation of range of the optimal Q-values. We then shuffle the generated data tuples (st, at, rt, st+1) update the model as described in Section 2.4. The model is trained with multiple epochs for all configurations, and is evaluated after each experience-replay. The discount factor γ is set to 0.9. For DRRN and all baselines, network weights are initialized with small random values. To prevent algorithms from “remembering” state-action ordering and make choices based on action wording, each time the algorithm/player reads text from the simulator, we randomly shuffle the list of actions.3 This will encourage the algorithms to make decisions based on the understanding of the texts that describe the states and actions. 3.3 Performance In Figure 4, we show the learning curves of different models, where the dimension of the hid3When in a specific state, the simulator presents the possible set of actions in random order, i.e. they may appear in a different order the next time a player is in this same state. 1626 0 500 1000 1500 2000 2500 3000 3500 Number of episodes -10 -5 0 5 10 15 20 Average reward DRRN (2-hidden) DRRN (1-hidden) PA DQN (2-hidden) MA DQN (2-hidden) (a) Game 1: “Saving John” 0 500 1000 1500 2000 2500 3000 3500 4000 Number of episodes -15 -10 -5 0 5 10 15 Average reward DRRN (2-hidden) DRRN (1-hidden) PA DQN (2-hidden) MA DQN (2-hidden) (b) Game 2: “Machine of Death” Figure 4: Learning curves of the two text games. Eval metric Average reward hidden dimension 20 50 100 Linear 3.3 (1.0) PA DQN (L = 1) 0.9 (2.4) 2.3 (0.9) 3.1 (1.3) PA DQN (L = 2) 1.3 (1.2) 2.3 (1.6) 3.4 (1.7) MA DQN (L = 1) 2.0 (1.2) 3.7 (1.6) 4.8 (2.9) MA DQN (L = 2) 2.8 (0.9) 4.3 (0.9) 5.2 (1.2) DRRN (L = 1) 7.2 (1.5) 8.4 (1.3) 8.7 (0.9) DRRN (L = 2) 9.2 (2.1) 10.7 (2.7) 11.2 (0.6) Table 3: The final average rewards and standard deviations on “Machine of Death”. den layers in the DQNs and DRRN are all set to 100. The error bars are obtained by running 5 independent experiments. The proposed methods and baselines all start at about the same performance (roughly -7 average rewards for Game 1, and roughly -8 average rewards for Game 2), which is the random guess policy. After around 4000 episodes of experience-replay training, all methods converge. 
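The softmax selection rule used during both training and testing can be sketched as follows, assuming the form π(a_i | s) ∝ exp(α · Q(s, a_i)) with α acting as an inverse temperature; this is a hedged reconstruction consistent with the description above, not the exact code.

```python
import numpy as np

def softmax_select(q_values, alpha=0.2, rng=np.random.default_rng(0)):
    """Sample an action index with probability proportional to exp(alpha * Q).

    Larger alpha is greedier; the setup above picks alpha from an estimate
    of the range of the optimal Q-values (0.2 and 1.0 for the two games).
    """
    logits = alpha * np.asarray(q_values, dtype=float)
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(probs), p=probs)), probs

# Q-values for the two actions of the running example ("Look up" vs. ignoring the alarm).
idx, probs = softmax_select([16.6, -21.5], alpha=0.2)
print(idx, probs.round(4))
```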
The DRRN converges much faster than the other three baselines and achieves a higher average reward. We hypothesize this is because the DRRN architecture is better at capturing relevance between state text and action text. The faster convergence for “Saving John” may be due to the smaller observation space and/or the deterministic nature of its state transitions (in contrast to the stochastic transitions in the other game). The final performance (at convergence) for both baselines and proposed methods are shown in Tables 2 and 3. We test for different model sizes with 20, 50, and 100 dimensions in the hidden layers. The DRRN performs consistently better than all baselines, and often with a lower variance. For Game 2, due to the complexity of the underlying state transition function, we cannot compute the exact optimal policy score. To provide more insight into the performance, we averaged scores of 8 human players for initial trials (novice) and after gaining experience, yielding scores of −5.5 and 16.0, respectively. The experienced players do outperform our algorithm. The converged performance is higher with two hidden layers for all models. However, deep models also converge more slowly than their 1 hidden layer versions, as shown for the DRRN in Figure 4. Besides an inner-product, we also experimented with more complex interaction functions: a) a bilinear operation with different action side dimensions; and b) a non-linear deep neural network using the concatenated state and action space embeddings as input and trained in an end-to-end fashion to predict Q values. For different configurations, we fix the state side embedding to be 100 dimensions and vary the action side embedding dimensions. The bilinear operation gave similar results, but the concatenation input to a DNN degraded performance. Similar behaviors have been observed on a different task (Luong et al., 2015). 3.4 Actions with paraphrased descriptions To investigate how our models handle actions with “unseen” natural language descriptions, we had two people paraphrase all actions in the game “Machine of Death” (used in testing phase), except a few single-word actions whose synonyms are out-of-vocabulary (OOV). The wordlevel OOV rate of paraphrased actions is 18.6%, 1627 Figure 5: Scatterplot and strong correlation between Q-values of paraphrased actions versus original actions and standard 4-gram BLEU score between the paraphrased and original actions is 0.325. The resulting 153 paraphrased action descriptions are associated with 532 unique state-action pairs. We apply a well-trained 2-layer DRRN model (with hidden dimension 100), and predict Qvalues for each state-action pair with fixed model parameters. Figure 5 shows the correlation between Q-values associated with paraphrased actions versus original actions. The predictive Rsquared is 0.95, showing a strong positive correlation. We also run Q-value correlation for the NN interaction and pR2 = 0.90. For baseline MA-DQN and PA-DQN, their corresponding pR2 is 0.84 and 0.97, indicating they also have some generalization ability. This is confirmed in the paraphrasing-based experiments too, where the test reward on the paraphrased setup is close to the original setup. This supports the claim that deep learning is useful in general for this language understanding task, and our findings show that a decoupled architecture most effectively leverages that approach. In Table 4 we provide examples with predicted Q-values of original descriptions and paraphrased descriptions. 
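One way to quantify the agreement visible in Figure 5 is a predictive R² between the Q-values assigned to original and paraphrased action texts; the sketch below uses placeholder numbers rather than the actual experimental values.

```python
import numpy as np

def predictive_r2(q_original, q_paraphrased):
    """R^2 of paraphrased-action Q-values against the original-action Q-values."""
    y, y_hat = np.asarray(q_original, float), np.asarray(q_paraphrased, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical Q-value pairs for a handful of state-action pairs.
q_orig = [16.6, -21.5, 2.8, -13.5, -1.4]
q_para = [17.5, -11.9, 2.0, -17.4, -3.6]
print(round(predictive_r2(q_orig, q_para), 3))
```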
We also include alternative action descriptions with in-vocabulary words that will lead to positive / negative / irrelevant game development at that particular state. Table 4 shows actions that are more likely to result in good endings are predicted with high Q-values. This indicates that the DRRN has some generalization ability and gains a useful level of language understanding in the game scenario. We use the baseline models and proposed DRRN model trained with the original action descriptions for “Machine of Death”, and test on paraphrased action descriptions. For this game, the underlying state transition mechanism has not changed. The only change to the game interface is that during testing, every time the player reads the actions from the game simulator, it reads the paraphrased descriptions and performs selection based on these paraphrases. Since the texts in test time are “unseen” to the player, a good model needs to have some level of language understanding, while a naive model that memorizes all unique action texts in the original game will do poorly. The results for these models are shown in Table 5. All methods have a slightly lower average reward in this setting (10.5 vs. 11.2 for the original actions), but the DRRN still gives a high reward and significantly outperforms other methods. This shows that the DRRN can generalize well to “unseen” natural language descriptions of actions. 4 Related Work There has been increasing interest in applying deep reinforcement learning to a variety problems, but only a few studies address problems with natural language state or action spaces. In language processing, reinforcement learning has been applied to a dialogue management system that converses with a human user by taking actions that generate natural language (Scheffler and Young, 2002; Young et al., 2013). There has also been interest in extracting textual knowledge to improve game control performance (Branavan et al., 2011), and mapping text instructions to sequences of executable actions (Branavan et al., 2009). In some applications, it is possible to manually design features for state-action pairs, which are then used in reinforcement learning to learn a near-optimal policy (Li et al., 2009). Designing such features, however, require substantial domain knowledge. The work most closely related to our study inolves application of deep reinforcement to learning decision policies for parser-based text games. Narasimhan et al. (2015) applied a Long ShortTerm Memory DQN framework, which achieves higher average reward than the random and Bagof-Words DQN baselines. In this work, actions are constrained to a set of known fixed command structures (one action and one argument object), 1628 Text (with predicted Q-values) State As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street. Actions in the original game Ignore the alarm of others and continue moving forward. (-21.5) Look up. (16.6) Paraphrased actions (not original) Disregard the caution of others and keep pushing ahead. (-11.9) Turn up and look. (17.5) Positive actions (not original) Stay there. (2.8) Stay calmly. (2.0) Negative actions (not original) Screw it. I’m going carefully. (-17.4) Yell at everyone. (-13.5) Irrelevant actions (not original) Insert a coin. (-1.4) Throw a coin to the ground. 
(-3.6) Table 4: Predicted Q-value examples Eval metric Average reward hidden dimension 20 50 100 PA DQN (L = 2) 0.2 (1.2) 2.6 (1.0) 3.6 (0.3) MA DQN (L=2) 2.5 (1.3) 4.0 (0.9) 5.1 (1.1) DRRN (L = 2) 7.3 (0.7) 8.3 (0.7) 10.5 (0.9) Table 5: The final average rewards and standard deviations on paraphrased game “Machine of Death”. based on a limited action-side vocabulary size. The overall action space is defined by the actionargument product space. This pre-specified product space is not feasible for the more complex text strings in other forms of text-based games. Our proposed DRRN, on the other hand, can handle the more complex text strings, as well as parserbased games. In preliminary experiments with the parser-based game from (Narasimhan et al., 2015), we find that the DRRN using a bag-of-words (BOW) input achieves results on par with their BOW DQN. The main advantage of the DRRN is that it can also handle actions described with more complex language. The DRRN experiments described here leverage only a simple bag-of-words representation of phrases and sentences. As observed in (Narasimhan et al., 2015), more complex sentence-based models can give further improvements. In preliminary experiments with “Machine of Death”, we did not find LSTMs to give improved performance, but we conjecture that they would be useful in larger-scale tasks, or when the word embeddings are initialized by training on large data sets. As mentioned earlier, other work has applied deep reinforcement learning to a problem with a continuous action space (Lillicrap et al., 2016). In the DRRN, the action space is inherently discrete, but we learn a continuous representation of it. As indicated by the paraphrasing experiment, the continuous space representation seems to generalize reasonably well. 5 Conclusion In this paper we develop a deep reinforcement relevance network, a novel DNN architecture for handling actions described by natural language in decision-making tasks such as text games. We show that the DRRN converges faster and to a better solution for Q-learning than alternative architectures that do not use separate embeddings for the state and action spaces. Future work includes: (i) adding an attention model to robustly analyze which part of state/actions text correspond to strategic planning, and (ii) applying the proposed methods to more complex text games or other tasks with actions defined through natural language. Acknowledgments We thank Karthik Narasimhan and Tejas Kulkarni for providing instructions on setting up their parser-based games. References E. Adams. 2014. Fundamentals of game design. Pearson Education. S.R.K. Branavan, H. Chen, L. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pages 82–90, August. S.R.K. Branavan, D. Silver, and R. Barzilay. 2011. Learning to win by reading manuals in a monte-carlo framework. In Proc. of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 268–277. Association for Computational Linguistics. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of the 25th International Conference on Machine learning, pages 160–167. ACM. 1629 G. E Dahl, D. Yu, L. Deng, and A. Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. 
Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97. P-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. of the ACM International Conference on Information & Knowledge Management, pages 2333– 2338. ACM. R. Kiros, Y. Zhu, R. R Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. 2015. Skipthought vectors. In Advances in Neural Information Processing Systems, pages 3276–3284. A. Krizhevsky, I. Sutskever, and G. E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Q. V Le and T. Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning. Y. LeCun, Y. Bengio, and G. Hinton. 2015. Deep learning. Nature, 521(7553):436–444. L. Li, J. D. Williams, and S. Balakrishnan. 2009. Reinforcement learning for spoken dialog management using least-squares policy iteration and fast feature selection. In Proceedings of the Tenth Annual Conference of the International Speech Communication Association (INTERSPEECH-09), page 24752478. T. P Lillicrap, J. J Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. 2016. Continuous control with deep reinforcement learning. In International Conference on Learning Representations. L-J. Lin. 1993. Reinforcement learning for robots using neural networks. Technical report, DTIC Document. M-T. Luong, H. Pham, and C. D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, September. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop, December. V. Mnih, K. Kavukcuoglu, D. Silver, A. A Rusu, J. Veness, M. G Bellemare, A. Graves, M. Riedmiller, A. K Fidjeland, G. Ostrovski, et al. 2015. Humanlevel control through deep reinforcement learning. Nature, 518(7540):529–533. K. Narasimhan, T. Kulkarni, and R. Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1–11, September. R. Nogueira and K. Cho. 2016. Webnav: A new largescale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261. K. Scheffler and S. Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proc. of the second International Conference on Human Language Technology Research, pages 12–19. I. Sutskever, O. Vinyals, and Q. V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. R. S Sutton and A. G Barto. 1998. Reinforcement learning: An introduction, volume 1. MIT press Cambridge. G. Tesauro. 1995. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58–68. C. JCH Watkins and P. Dayan. 1992. Q-learning. Machine learning, 8(3-4):279–292. S. Young, M. 
Gasic, B. Thomson, and J. D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. 1630
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1631–1640, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Incorporating Copying Mechanism in Sequence-to-Sequence Learning Jiatao Gu† Zhengdong Lu‡ Hang Li‡ Victor O.K. Li† †Department of Electrical and Electronic Engineering, The University of Hong Kong {jiataogu, vli}@eee.hku.hk ‡Huawei Noah’s Ark Lab, Hong Kong {lu.zhengdong, hangli.hl}@huawei.com Abstract We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural networkbased Seq2Seq learning and propose a new model called COPYNET with encoderdecoder structure. COPYNET can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose subsequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real world data sets demonstrates the efficacy of COPYNET. For example, COPYNET can outperform regular RNN-based model with remarkable margins on text summarization tasks. 1 Introduction Recently, neural network-based sequence-tosequence learning (Seq2Seq) has achieved remarkable success in various natural language processing (NLP) tasks, including but not limited to Machine Translation (Cho et al., 2014; Bahdanau et al., 2014), Syntactic Parsing (Vinyals et al., 2015b), Text Summarization (Rush et al., 2015) and Dialogue Systems (Vinyals and Le, 2015). Seq2Seq is essentially an encoder-decoder model, in which the encoder first transform the input sequence to a certain representation which can then transform the representation into the output sequence. Adding the attention mechanism (Bahdanau et al., 2014) to Seq2Seq, first proposed for automatic alignment in machine translation, has led to significant improvement on the performance of various tasks (Shang et al., 2015; Rush et al., 2015). Different from the canonical encoderdecoder architecture, the attention-based Seq2Seq model revisits the input sequence in its raw form (array of word representations) and dynamically fetches the relevant piece of information based mostly on the feedback from the generation of the output sequence. In this paper, we explore another mechanism important to the human language communication, called the “copying mechanism”. Basically, it refers to the mechanism that locates a certain segment of the input sentence and puts the segment into the output sequence. For example, in the following two dialogue turns we observe different patterns in which some subsequences (colored blue) in the response (R) are copied from the input utterance (I): I: Hello Jack, my name is Chandralekha. R: Nice to meet you, Chandralekha. I: This new guy doesn’t perform exactly as we expected. R: What do you mean by "doesn’t perform exactly as we expected"? 
Both the canonical encoder-decoder and its variants with attention mechanism rely heavily on the representation of “meaning”, which might not be sufficiently inaccurate in cases in which the system needs to refer to sub-sequences of input like entity names or dates. In contrast, the 1631 copying mechanism is closer to the rote memorization in language processing of human being, deserving a different modeling strategy in neural network-based models. We argue that it will benefit many Seq2Seq tasks to have an elegant unified model that can accommodate both understanding and rote memorization. Towards this goal, we propose COPYNET, which is not only capable of the regular generation of words but also the operation of copying appropriate segments of the input sequence. Despite the seemingly “hard” operation of copying, COPYNET can be trained in an end-toend fashion. Our empirical study on both synthetic datasets and real world datasets demonstrates the efficacy of COPYNET. 2 Background: Neural Models for Sequence-to-sequence Learning Seq2Seq Learning can be expressed in a probabilistic view as maximizing the likelihood (or some other evaluation metrics (Shen et al., 2015)) of observing the output (target) sequence given an input (source) sequence. 2.1 RNN Encoder-Decoder RNN-based Encoder-Decoder is successfully applied to real world Seq2Seq tasks, first by Cho et al. (2014) and Sutskever et al. (2014), and then by (Vinyals and Le, 2015; Vinyals et al., 2015a). In the Encoder-Decoder framework, the source sequence X = [x1, ..., xTS] is converted into a fixed length vector c by the encoder RNN, i.e. ht = f(xt, ht−1); c = φ({h1, ..., hTS}) (1) where {ht} are the RNN states, c is the so-called context vector, f is the dynamics function, and φ summarizes the hidden states, e.g. choosing the last state hTS. In practice it is found that gated RNN alternatives such as LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) often perform much better than vanilla ones. The decoder RNN is to unfold the context vector c into the target sequence, through the following dynamics and prediction model: st = f(yt−1, st−1, c) p(yt|y<t, X) = g(yt−1, st, c) (2) where st is the RNN state at time t, yt is the predicted target symbol at t (through function g(·)) with y<t denoting the history {y1, ..., yt−1}. The prediction model is typically a classifier over the vocabulary with, say, 30,000 words. 2.2 The Attention Mechanism The attention mechanism was first introduced to Seq2Seq (Bahdanau et al., 2014) to release the burden of summarizing the entire source into a fixed-length vector as context. Instead, the attention uses a dynamically changing context ct in the decoding process. A natural option (or rather “soft attention”) is to represent ct as the weighted sum of the source hidden states, i.e. ct = TS X τ=1 αtτhτ; αtτ = eη(st−1,hτ) P τ ′ eη(st−1,hτ′) (3) where η is the function that shows the correspondence strength for attention, approximated usually with a multi-layer neural network (DNN). Note that in (Bahdanau et al., 2014) the source sentence is encoded with a Bi-directional RNN, making each hidden state hτ aware of the contextual information from both ends. 3 COPYNET From a cognitive perspective, the copying mechanism is related to rote memorization, requiring less understanding but ensuring high literal fidelity. 
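For reference, the attentive read of Eq. (3), which COPYNET inherits from the attention-based encoder-decoder, reduces to the small computation sketched below; the scoring function η is assumed here to be a bilinear form, one common choice among several.

```python
import numpy as np

def attentive_read(s_prev, H, Wa):
    """Soft attentive read: c_t = sum_tau alpha_{t,tau} h_tau  (cf. Eq. (3)).

    s_prev : previous decoder state, shape (d_s,)
    H      : encoder hidden states, shape (T_source, d_h)
    Wa     : assumed bilinear scoring matrix for eta, shape (d_s, d_h)
    """
    scores = H @ (Wa.T @ s_prev)            # eta(s_{t-1}, h_tau) for every position tau
    weights = np.exp(scores - scores.max())
    alphas = weights / weights.sum()        # softmax over source positions
    return alphas @ H, alphas               # context c_t and attention weights

rng = np.random.default_rng(1)
c_t, alphas = attentive_read(rng.normal(size=8), rng.normal(size=(5, 6)), rng.normal(size=(8, 6)))
print(c_t.shape, alphas.round(3))
```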
From a modeling perspective, the copying operations are more rigid and symbolic, making it more difficult than soft attention mechanism to integrate into a fully differentiable neural model. In this section, we present COPYNET, a differentiable Seq2Seq model with “copying mechanism”, which can be trained in an end-to-end fashion with just gradient descent. 3.1 Model Overview As illustrated in Figure 1, COPYNET is still an encoder-decoder (in a slightly generalized sense). The source sequence is transformed by Encoder into representation, which is then read by Decoder to generate the target sequence. Encoder: Same as in (Bahdanau et al., 2014), a bi-directional RNN is used to transform the source sequence into a series of hidden states with equal length, with each hidden state ht corresponding to word xt. This new representation of the source, {h1, ..., hTS}, is considered to be a short-term memory (referred to as M in the remainder of the paper), which will later be accessed in multiple ways in generating the target sequence (decoding). 1632 hello , my name is Tony Jebara . Attentive Read hi , Tony Jebara <eos> hi , Tony h1 h2 h3 h4 h5 s1 s2 s3 s4 h6 h7 h8 “Tony” DNN Embedding for “Tony” Selective Read for “Tony” (a) Attention-based Encoder-Decoder (RNNSearch) (c) State Update s4 Source Vocabulary Softmax Prob(“Jebara”) = Prob(“Jebara”, g) + Prob(“Jebara”, c) … ... (b) Generate-Mode & Copy-Mode 𝜌 M M Figure 1: The overall diagram of COPYNET. For simplicity, we omit some links for prediction (see Sections 3.2 for more details). Decoder: An RNN that reads M and predicts the target sequence. It is similar with the canonical RNN-decoder in (Bahdanau et al., 2014), with however the following important differences • Prediction: COPYNET predicts words based on a mixed probabilistic model of two modes, namely the generate-mode and the copymode, where the latter picks words from the source sequence (see Section 3.2); • State Update: the predicted word at time t−1 is used in updating the state at t, but COPYNET uses not only its word-embedding but also its corresponding location-specific hidden state in M (if any) (see Section 3.3 for more details); • Reading M: in addition to the attentive read to M, COPYNET also has“selective read” to M, which leads to a powerful hybrid of content-based addressing and location-based addressing (see both Sections 3.3 and 3.4 for more discussion). 3.2 Prediction with Copying and Generation We assume a vocabulary V = {v1, ..., vN}, and use UNK for any out-of-vocabulary (OOV) word. In addition, we have another set of words X, for all the unique words in source sequence X = {x1, ..., xTS}. Since X may contain words not in V, copying sub-sequence in X enables COPYNET to output some OOV words. In a nutshell, the instance-specific vocabulary for source X is V ∪UNK ∪X. Given the decoder RNN state st at time t together with M, the probability of generating any target word yt, is given by the “mixture” of probabilities as follows p(yt|st, yt−1, ct, M) = p(yt, g|st, yt−1, ct, M) + p(yt, c|st, yt−1, ct, M) (4) where g stands for the generate-mode, and c the copy mode. The probability of the two modes are given respectively by p(yt, g|·)=        1 Z eψg(yt), yt ∈V 0, yt ∈X ∩¯V 1 Z eψg(UNK) yt ̸∈V ∪X (5) p(yt, c|·)= ( 1 Z P j:xj=yt eψc(xj), yt ∈X 0 otherwise (6) where ψg(·) and ψc(·) are score functions for generate-mode and copy-mode, respectively, and Z is the normalization term shared by the two modes, Z = P v∈V∪{UNK} eψg(v) + P x∈X eψc(x). 
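A minimal sketch of the prediction step in Eqs. (4)–(6): generate-mode scores over the vocabulary and copy-mode scores over source positions are exponentiated and normalized by the single shared term Z, and copy probability mass for a word repeated in the source is summed over its positions. The scores below are toy numbers standing in for ψ_g and ψ_c.

```python
import math
from collections import defaultdict

def mixture_probs(gen_scores, copy_scores, source_words):
    """p(y_t) = p(y_t, g) + p(y_t, c) with one shared normalizer Z (Eqs. (4)-(6)).

    gen_scores   : dict word -> psi_g(word) over V (including 'UNK')
    copy_scores  : list of psi_c(x_j), one score per source position j
    source_words : source sequence x_1..x_TS (repetitions allowed)
    """
    Z = sum(math.exp(s) for s in gen_scores.values()) + \
        sum(math.exp(s) for s in copy_scores)
    p = defaultdict(float)
    for w, s in gen_scores.items():               # generate-mode mass
        p[w] += math.exp(s) / Z
    for x, s in zip(source_words, copy_scores):   # copy-mode mass; repeated source
        p[x] += math.exp(s) / Z                   # words accumulate over positions
    return dict(p)

source = "hello , my name is Tony Jebara .".split()
probs = mixture_probs({"hi": 1.2, ",": 0.5, "UNK": -1.0},
                      [0.1, 0.0, -0.5, -0.2, -0.3, 2.0, 1.8, 0.0],
                      source)
print(max(probs, key=probs.get), round(probs["Tony"], 3))
```

Because Z is shared, raising any copy-mode score necessarily lowers the probability of every generate-mode candidate, which is the competition between the two modes discussed next.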
Due to the shared normalization term, the two modes are basically competing through a softmax function (see Figure 1 for an illustration with example), rendering Eq.(4) different from the canonical definition of the mixture model (McLachlan and Basford, 1988). This is also pictorially illustrated in Figure 2. The score of each mode is calculated: 1633 unk 𝑋 𝑉 𝑋∩𝑉 ' ( exp 𝜓- 𝑣/ | 𝑣/ = 𝑦4 ' ( ∑ exp 𝜓6 𝑥8 | 𝑥8= 𝑦4 9: ' ( ∑ exp 𝜓6 𝑥8 + 9: exp 𝜓- 𝑣/ | 𝑥8 = 𝑦4,𝑣/ = 𝑦4 ' ( exp [𝜓- unk ] *Z is the normalization term. Figure 2: The illustration of the decoding probability p(yt|·) as a 4-class classifier. Generate-Mode: The same scoring function as in the generic RNN encoder-decoder (Bahdanau et al., 2014) is used, i.e. ψg(yt = vi) = v⊤ i Wost, vi ∈V ∪UNK (7) where Wo ∈R(N+1)×ds and vi is the one-hot indicator vector for vi. Copy-Mode: The score for “copying” the word xj is calculated as ψc(yt = xj) = σ  h⊤ j Wc  st, xj ∈X (8) where Wc ∈Rdh×ds, and σ is a non-linear activation function, considering that the non-linear transformation in Eq.( 8) can help project st and hj in the same semantic space. Empirically, we also found that using the tanh non-linearity worked better than linear transformation, and we used that for the following experiments. When calculating the copy-mode score, we use the hidden states {h1, ..., hTS} to “represent” each of the word in the source sequence {x1, ..., xTS} since the bidirectional RNN encodes not only the content, but also the location information into the hidden states in M. The location informaton is important for copying (see Section 3.4 for related discussion). Note that we sum the probabilities of all xj equal to yt in Eq. (6) considering that there may be multiple source symbols for decoding yt. Naturally we let p(yt, c|·) = 0 if yt does not appear in the source sequence, and set p(yt, g|·) = 0 when yt only appears in the source. 3.3 State Update COPYNET updates each decoding state st with the previous state st−1, the previous symbol yt−1 and the context vector ct following Eq. (2) for the generic attention-based Seq2Seq model. However, there is some minor changes in the yt−1−→st path for the copying mechanism. More specifically, yt−1 will be represented as [e(yt−1); ζ(yt−1)]⊤, where e(yt−1) is the word embedding associated with yt−1, while ζ(yt−1) is the weighted sum of hidden states in M corresponding to yt ζ(yt−1) = XTS τ=1 ρtτhτ ρtτ = ( 1 K p(xτ, c|st−1, M), xτ = yt−1 0 otherwise (9) where K is the normalization term which equals P τ ′:xτ′=yt−1 p(xτ ′, c|st−1, M), considering there may exist multiple positions with yt−1 in the source sequence. In practice, ρtτ is often concentrated on one location among multiple appearances, indicating the prediction is closely bounded to the location of words. In a sense ζ(yt−1) performs a type of read to M similar to the attentive read (resulting ct) with however higher precision. In the remainder of this paper, ζ(yt−1) will be referred to as selective read. ζ(yt−1) is specifically designed for the copy mode: with its pinpointing precision to the corresponding yt−1, it naturally bears the location of yt−1 in the source sequence encoded in the hidden state. As will be discussed more in Section 3.4, this particular design potentially helps copy-mode in covering a consecutive sub-sequence of words. If yt−1 is not in the source, we let ζ(yt−1) = 0. 3.4 Hybrid Addressing of M We hypothesize that COPYNET uses a hybrid strategy for fetching the content in M, which combines both content-based and location-based addressing. 
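The selective read of Eq. (9) admits an equally small sketch: encoder states at the positions where the source word equals the previously emitted symbol y_{t-1} are combined with weights renormalized from the copy-mode probabilities, and a zero vector is used when y_{t-1} does not occur in the source. The copy probabilities passed in are assumed to come from the previous decoding step.

```python
import numpy as np

def selective_read(y_prev, source_words, H, copy_probs):
    """zeta(y_{t-1}) as in Eq. (9): weighted sum of encoder states at the
    positions tau with x_tau == y_{t-1}, weights renormalized over those
    positions; a zero vector is returned if y_{t-1} is not in the source."""
    H = np.asarray(H, dtype=float)
    mask = np.array([w == y_prev for w in source_words], dtype=float)
    rho = mask * np.asarray(copy_probs, dtype=float)
    K = rho.sum()
    if K == 0.0:
        return np.zeros(H.shape[1])
    return (rho / K) @ H

H = np.eye(4)                       # toy encoder states, one row per source word
zeta = selective_read("Tony", ["hi", ",", "Tony", "Jebara"], H, [0.05, 0.01, 0.90, 0.04])
print(zeta)
```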
Both addressing strategies are coordinated by the decoder RNN in managing the attentive read and selective read, as well as determining when to enter/quit the copy-mode. Both the semantics of a word and its location in X will be encoded into the hidden states in M by a properly trained encoder RNN. Judging from our experiments, the attentive read of COPYNET is driven more by the semantics and language model, therefore capable of traveling more freely on M, even across a long distance. On the other hand, once COPYNET enters the copy-mode, the selective read of M is often guided by the location information. As the result, the selective read often takes rigid move and tends to cover consecutive words, including UNKs. Unlike the explicit design for hybrid addressing in Neural Turing Machine (Graves et al., 2014; Kurach et al., 2015), COPYNET is more subtle: it provides the archi1634 tecture that can facilitate some particular locationbased addressing and lets the model figure out the details from the training data for specific tasks. Location-based Addressing: With the location information in {hi}, the information flow ζ(yt−1) update −−−→st predict −−−→yt sel. read −−−−→ζ(yt) provides a simple way of “moving one step to the right” on X. More specifically, assuming the selective read ζ(yt−1) concentrates on the ℓth word in X, the state-update operation ζ(yt−1) update −−−→st acts as “location ←location+1”, making st favor the (ℓ+1)th word in X in the prediction st predict −−−→yt in copy-mode. This again leads to the selective read ˆht sel. read −−−−→ζ(yt) for the state update of the next round. Handling Out-of-Vocabulary Words Although it is hard to verify the exact addressing strategy as above directly, there is strong evidence from our empirical study. Most saliently, a properly trained COPYNET can copy a fairly long segment full of OOV words, despite the lack of semantic information in its M representation. This provides a natural way to extend the effective vocabulary to include all the words in the source. Although this change is small, it seems quite significant empirically in alleviating the OOV problem. Indeed, for many NLP applications (e.g., text summarization or spoken dialogue system), much of the OOV words on the target side, for example the proper nouns, are essentially the replicates of those on the source side. 4 Learning Although the copying mechanism uses the “hard” operation to copy from the source and choose to paste them or generate symbols from the vocabulary, COPYNET is fully differentiable and can be optimized in an end-to-end fashion using backpropagation. Given the batches of the source and target sequence {X}N and {Y }N, the objectives are to minimize the negative log-likelihood: L = −1 N N X k=1 T X t=1 log h p(y(k) t |y(k) <t , X(k)) i , (10) where we use superscripts to index the instances. Since the probabilistic model for observing any target word is a mixture of generate-mode and copy-mode, there is no need for any additional labels for modes. The network can learn to coordinate the two modes from data. More specifically, if one particular word y(k) t can be found in the source sequence, the copy-mode will contribute to the mixture model, and the gradient will more or less encourage the copy-mode; otherwise, the copy-mode is discouraged due to the competition from the shared normalization term Z. In practice, in most cases one mode dominates. 5 Experiments We report our empirical study of COPYNET on the following three tasks with different characteristics 1. 
A synthetic dataset on with simple patterns; 2. A real-world task on text summarization; 3. A dataset for simple single-turn dialogues. 5.1 Synthetic Dataset Dataset: We first randomly generate transformation rules with 5∼20 symbols and variables x & y, e.g. a b x c d y e f −→ g h x m, with {a b c d e f g h m} being regular symbols from a vocabulary of size 1,000. As shown in the table below, each rule can further produce a number of instances by replacing the variables with randomly generated subsequences (1∼15 symbols) from the same vocabulary. We create five types of rules, including “x →∅”. The task is to learn to do the Seq2Seq transformation from the training instances. This dataset is designed to study the behavior of COPYNET on handling simple and rigid patterns. Since the strings to repeat are random, they can also be viewed as some extreme cases of rote memorization. Rule-type Examples (e.g. x = i h k, y = j c) x →∅ a b c d x e f →c d g x →x a b c d x e f →c d x g x →x x a b c d x e f →x d x g x y →x a b y d x e f →x d i g x y →x y a b y d x e f →x d y g Experimental Setting: We select 200 artificial rules from the dataset, and for each rule 200 instances are generated, which will be split into training (50%) and testing (50%). We compare the accuracy of COPYNET and the RNN EncoderDecoder with (i.e. RNNsearch) or without attention (denoted as Enc-Dec). For a fair comparison, we use bi-directional GRU for encoder and another GRU for decoder for all Seq2Seq models, with hidden layer size = 300 and word embedding dimension = 150. We use bin size = 10 in beam search for testing. The prediction is considered 1635 Rule-type x x x xy xy →∅ →x →xx →x →xy Enc-Dec 100 3.3 1.5 2.9 0.0 RNNSearch 99.0 69.4 22.3 40.7 2.6 COPYNET 97.3 93.7 98.3 68.2 77.5 Table 1: The test accuracy (%) on synthetic data. correct only when the generated sequence is exactly the same as the given one. It is clear from Table 1 that COPYNET significantly outperforms the other two on all rule-types except “x →∅”, indicating that COPYNET can effectively learn the patterns with variables and accurately replicate rather long subsequence of symbols at the proper places.This is hard to Enc-Dec due to the difficulty of representing a long sequence with very high fidelity. This difficulty can be alleviated with the attention mechanism. However attention alone seems inadequate for handling the case where strict replication is needed. A closer look (see Figure 3 for example) reveals that the decoder is dominated by copy-mode when moving into the subsequence to replicate, and switch to generate-mode after leaving this area, showing COPYNET can achieve a rather precise coordination of the two modes. Pattern: 705 502 X 504 339 270 584 556 → 510 771 581 557 022 230 X 115 102 172 862 X 950 * Symbols are represented by their indices from 000 to 999 ** Dark color represents large value. 510 771 581 557 263 022 230 970 991 575 325 688 115 102 172 862 970 991 575 325 688 950 705 502 970 991 575 325 688 504 339 270 584 556 The Source Sequence <0> 510 771 581 557 263 022 230 970 991 575 325 688 115 102 172 862 970 991 575 325 688 Input: The Target Sequence Predict: Figure 3: Example output of COPYNET on the synthetic dataset. The heatmap represents the activations of the copy-mode over the input sequence (left) during the decoding process (bottom). 5.2 Text Summarization Automatic text summarization aims to find a condensed representation which can capture the core meaning of the original document. 
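The rule-instantiation procedure behind the synthetic dataset of Section 5.1 can be sketched as follows; the particular rule, symbol indices, and filler lengths are placeholders chosen for illustration only.

```python
import random

random.seed(7)
VOCAB = [f"{i:03d}" for i in range(1000)]          # regular symbols 000..999

def fill_rule(lhs, rhs, variables=("x", "y")):
    """Instantiate one transformation rule by replacing each variable with a
    random subsequence of 1-15 symbols (the same filler on both sides)."""
    fillers = {v: random.sample(VOCAB, random.randint(1, 15)) for v in variables}
    expand = lambda side: [tok for t in side
                           for tok in (fillers[t] if t in fillers else [t])]
    return expand(lhs), expand(rhs)

# One instance of a rule of type "x y -> x y".
lhs = ["705", "502", "y", "504", "x", "339", "270"]
rhs = ["x", "504", "y", "584"]
src, tgt = fill_rule(lhs, rhs)
print(" ".join(src), "->", " ".join(tgt))
```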
It has been recently formulated as a Seq2Seq learning problem in (Rush et al., 2015; Hu et al., 2015), which essentially gives abstractive summarization since the summary is generated based on a representation of the document. In contrast, extractive summarization extracts sentences or phrases from the original text to fuse them into the summaries, therefore making better use of the overall structure of the original document. In a sense, COPYNET for summarization lies somewhere between two categories, since part of output summary is actually extracted from the document (via the copying mechanism), which are fused together possibly with the words from the generate-mode. Dataset: We evaluate our model on the recently published LCSTS dataset (Hu et al., 2015), a large scale dataset for short text summarization. The dataset is collected from the news medias on Sina Weibo1 including pairs of (short news, summary) in Chinese. Shown in Table 2, PART II and III are manually rated for their quality from 1 to 5. Following the setting of (Hu et al., 2015) we use Part I as the training set and and the subset of Part III scored from 3 to 5 as the testing set. Dataset PART I PART II PART III no. of pairs 2,400,591 10,666 1106 no. of score ≥3 8685 725 Table 2: Some statistics of the LCSTS dataset. Experimental Setting: We try COPYNET that is based on character (+C) and word (+W). For the word-based variant the word-segmentation is obtained with jieba2. We set the vocabulary size to 3,000 (+C) and 10,000 (+W) respectively, which are much smaller than those for models in (Hu et al., 2015). For both variants we set the embedding dimension to 350 and the size of hidden layers to 500. Following (Hu et al., 2015), we evaluate the test performance with the commonly used ROUGE-1, ROUGE-2 and ROUGE-L (Lin, 2004), and compare it against the two models in (Hu et al., 2015), which are essentially canonical Encoder-Decoder and its variant with attention. Models ROUGE scores on LCSTS (%) R-1 R-2 R-L RNN +C 21.5 8.9 18.6 (Hu et al., 2015) +W 17.7 8.5 15.8 RNN context +C 29.9 17.4 27.2 (Hu et al., 2015) +W 26.8 16.1 24.1 COPYNET +C 34.4 21.6 31.3 +W 35.0 22.3 32.0 Table 3: Testing performance of LCSTS, where “RNN” is canonical Enc-Dec, and “RNN context” its attentive variant. It is clear from Table 3 that COPYNET beats the competitor models with big margin. Hu et al. (2015) reports that the performance of a word-based model is inferior to a character-based 1www.sina.com 2https://pypi.python.org/pypi/jieba 1636 Input(1): 今天上午9 点半,复旦投毒案将在上海二中院公开审理。被害学生黄洋的亲属已从四川抵达上海,其父称待刑事部分结束后,再提民事赔偿,黄洋92 岁的奶奶依然 不知情。今年4 月,在复旦上海医学院读研究生的黄洋疑遭室友林森浩投毒,不幸身亡。新民网 Today 9:30, the Fudan poisoning case will be will on public trial at the Shanghai Second Intermediate Court. The relatives of the murdered student Huang Yang has arrived at Shanghai from Sichuan. His father said that they will start the lawsuit for civil compensation after the criminal section. HuangYang 92-year-old grandmother is still unaware of his death. In April, a graduate student at Fudan University Shanghai Medical College, Huang Yang is allegedly poisoned and killed by his roommate Lin Senhao. Reported by Xinmin ______________________________________________________________ Golden: 林森浩投毒案今日开审92 岁奶奶尚不知情 the case of Lin Senhao poisoning is on trial today, his 92-year-old grandmother is still unaware of this RNN context: 复旦投毒案:黄洋疑遭室友投毒凶手已从四川飞往上海,父亲命案另有4人被通知家属不治? 
CopyNet: 复旦投毒案今在沪上公开审理 the Fudan poisoning case is on public trial today in Shanghai Input(2): 华谊兄弟(300027 )在昨日收盘后发布公告称,公司拟以自有资金3.978 亿元收购浙江永乐影视股份有限公司若干股东持有的永乐影视51 % 的股权。对于此项收购, 华谊兄弟董秘胡明昨日表示:“ 和永乐影视的合并是对华谊兄弟电视剧业务的一个加强。 Huayi Brothers (300027) announced that the company intends to buy with its own fund 397.8 million 51% of Zhejiang Yongle Film LTD's stake owned by a number of shareholders of Yongle Film LTD. For this acquisition, the secretary of the board, Hu Ming, said yesterday: "the merging with Yongle Film is to strengthen Huayi Brothers on TV business". ______________________________________________________________ Golden: 华谊兄弟拟收购永乐影视51%股权 Huayi Brothers intends to acquire 51% stake of Zhejiang Yongle Film RNN context: 华谊兄弟收购永乐影视51%股权:与永乐影视合并为“和唐”影视合并的“UNK”和“UNK”的区别? CopyNet: 华谊兄弟拟3.978 亿收购永乐影视董秘称加强电视剧业务 Huayi Brothers is intended to 397.8 million acquisition of Yongle Film secretaries called to strengthen the TV business Input(3): 工厂,大门紧锁,约20 名工人散坐在树荫下。“ 我们就是普通工人,在这里等工资。” 其中一人说道。7 月4 日上午,记者抵达深圳龙华区清湖路上的深圳愿景 光电子有限公司。正如传言一般,愿景光电子倒闭了,大股东邢毅不知所踪。 The door of factory is locked. About 20 workers are scattered to sit under the shade. “We are ordinary workers, waiting for our salary” one of them said. In the morning of July 4th, reporters arrived at Yuanjing Photoelectron Corporation located at Qinghu Road, Longhua District, Shenzhen. Just as the rumor, Yuanjing Photoelectron Corporation is closed down and the big shareholder Xing Yi is missing. ______________________________________________________________ Golden: 深圳亿元级LED 企业倒闭烈日下工人苦等老板 Hundred-million CNY worth LED enterprise is closed down and workers wait for the boss under the scorching sun RNN context: 深圳“<UNK>”:深圳<UNK><UNK>,<UNK>,<UNK>,<UNK> CopyNet: 愿景光电子倒闭20 名工人散坐在树荫下 Yuanjing Photoelectron Corporation is closed down, 20 workers are scattered to sit under the shade Input(4): 截至2012 年10 月底,全国累计报告艾滋病病毒感染者和病人492191 例。卫生部称,性传播已成为艾滋病的主要传播途径。至2011 年9 月,艾滋病感染者和病人数累 计报告数排在前6 位的省份依次为云南、广西、河南、四川、新疆和广东,占全国的75.8 % 。。 At the end of October 2012, the national total of reported HIV infected people and AIDS patients is 492,191 cases. The Health Ministry saids exual transmission has become the main route of transmission of AIDS. To September 2011, the six provinces with the most reported HIV infected people and AIDS patients were Yunnan, Guangxi, Henan,Sichuan, Xinjiang and Guangdong, accounting for 75.8% of the country. ______________________________________________________________ Golden: 卫生部:性传播成艾滋病主要传播途径 Ministry of Health: Sexually transmission became the main route of transmission of AIDS RNN context: 全国累计报告艾滋病患者和病人<UNK>例艾滋病患者占全国<UNK>%,性传播成艾滋病高发人群? CopyNet: 卫生部:性传播已成为艾滋病主要传播途径 Ministry of Health: Sexually transmission has become the main route of transmission of AIDS Input(5): 中国反垄断调查风暴继续席卷汽车行业,继德国车企奥迪和美国车企克莱斯勒“ 沦陷” 之后,又有12 家日本汽车企业卷入漩涡。记者从业内人士获悉,丰田旗下的 雷克萨斯近期曾被发改委约谈。 Chinese antitrust investigation continues to sweep the automotive industry. After Germany Audi car and the US Chrysler "fell", there are 12 Japanese car companies involved in the whirlpool. Reporters learned from the insiders that Toyota's Lexus has been asked to report to the Development and Reform Commission recently. ______________________________________________________________ Golden: 发改委公布汽车反垄断进程:丰田雷克萨斯近期被约谈 the investigation by Development and Reform Commission: Toyota's Lexus has been asked to report RNN context: 丰田雷克萨斯遭发改委约谈:曾被约谈丰田旗下的雷克萨斯遭发改委约谈负人被约谈 CopyNet: 中国反垄断继续席卷汽车行业12 家日本汽车企业被发改委约谈 Chinese antitrust investigation continues to sweep the automotive industry. 
12 Japanese car companies are asked to report to he Development and Reform Commission Input(6): 镁离子电池相比锂电池能量密度提升了近一倍,这意味着使用了镁电池的电动车,纯电续航也将有质的提升。但目前由于电解质等技术壁垒,要大规模量产并取代锂电池还为时过早。 The energy density of Magnesium ion batteries almost doubles that of lithium battery, which means that for the electric vehicles using of magnesium batteries will last longer even at pure electric power. But currently due to the technical barriers to the electrolyte, it is still too early for the mass production of it and replacing lithium batteries.. ______________________________________________________________ Golden: 锂电池或将被淘汰能量密度更高的镁电池亦大势所趋 Lithium batteries will be phased out, magnesium battery with energy density higher will be the future trend RNN context: <UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>、<UNK>电池了 CopyNet: 镁离子电池问世:大规模量产取代锂电池 Magnesium ion battery is developed : mass production of it will replace lithium batteries Input(7): 1 . 掌握技巧融会贯通;2 . 学会融资;3 . 懂法律;4 . 保持自信;5 . 测试+ 尝试;6 . 了解客户的需求;7 . 预测+ 衡量+ 确保;8 . 做好与各种小bug 做斗争的心态;9 . 发现机遇保持创业激情。 1. master the skills; 2 Learn to finance ; 3. understand the law; 4. Be confident; 5. test+ trial; 6. understand the need of customers; 7 forecast + measure + ensure; 8. mentally prepared to fight all kinds of small bugs ; 9 discover opportunities and keep the passion of start-up. ______________________________________________________________ Golden: 初次创业者必知的10 个技巧 The 10 tips for the first time start-ups RNN context: 6个方法让你创业的6个<UNK>与<UNK>,你怎么看懂你的创业故事吗?(6家) CopyNet: 创业成功的9 个技巧 The 9 tips for success in start-up Input(8): 9 月3 日,总部位于日内瓦的世界经济论坛发布了《2014 - 2015 年全球竞争力报告》,瑞士连续六年位居榜首,成为全球最具竞争力的国家,新加坡和美国分列第二 位和第三位。中国排名第28 位,在金砖国家中排名最高。 On September 3, the Geneva based World Economic Forum released “ The Global Competitiveness Report 2014-2015”. Switzerland topped the list for six consecutive years , becoming the world‘s most competitive country. Singapore and the United States are in the second and third place respectively. China is in the 28th place, ranking highest among the BRIC countries. ______________________________________________________________ Golden: 全球竞争力排行榜中国居28 位居金砖国家首位 The Global competitiveness ranking list, China is in the 28th place, the highest among BRIC countries. RNN context: 2014-2015年全球竞争力报告:瑞士连续6年居榜首中国居28位(首/3———访榜首)中国排名第28位 CopyNet: 2014 - 2015 年全球竞争力报告:瑞士居首中国第28 2014--2015 Global Competitiveness Report: Switzerland topped and China the 28th Figure 4: Examples of COPYNET on LCSTS compared with RNN context. Word segmentation is applied on the input, where OOV words are underlined. The highlighted words (with different colors) are those words with copy-mode probability higher than the generate-mode. We also provide literal English translation for the document, the golden, and COPYNET, while omitting that for RNN context since the language is broken. 1637 one. One possible explanation is that a wordbased model, even with a much larger vocabulary (50,000 words in Hu et al. (2015)), still has a large proportion of OOVs due to the large number of entity names in the summary data and the mistakes in word segmentation. COPYNET, with its ability to handle the OOV words with the copying mechanism, performs however slightly better with the word-based variant. 
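The OOV behaviour discussed above follows from the instance-specific output vocabulary V ∪ {UNK} ∪ X introduced in Section 3.2. The sketch below shows one way the per-instance index space might be laid out so that copy-mode can emit source-only words; the layout is an assumption for illustration, not the paper's data pipeline.

```python
def instance_vocab(global_vocab, source_words):
    """Index every word the decoder may emit for this instance: the global
    vocabulary V first, then UNK, then source-only words from X."""
    word2id = {w: i for i, w in enumerate(global_vocab)}
    word2id.setdefault("UNK", len(word2id))
    for w in source_words:                 # source OOVs receive extra indices
        word2id.setdefault(w, len(word2id))
    return word2id

V = ["hi", ",", "my", "name", "is", "."]
source = "hello , my name is Tony Jebara .".split()
vocab = instance_vocab(V, source)
# "Tony" and "Jebara" are OOV for V but can still be emitted via copy-mode.
print(vocab["Tony"], vocab["Jebara"], vocab["UNK"])
```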
5.2.1 Case Study As shown in Figure 4, we make the following interesting observations about the summary from COPYNET: 1) most words are from copy-mode, but the summary is usually still fluent; 2) COPYNET tends to cover consecutive words in the original document, but it often puts together segments far away from each other, indicating a sophisticated coordination of content-based addressing and location-based addressing; 3) COPYNET handles OOV words really well: it can generate acceptable summary for document with many OOVs, and even the summary itself often contains many OOV words. In contrast, the canonical RNN-based approaches often fail in such cases. It is quite intriguing that COPYNET can often find important parts of the document, a behavior with the characteristics of extractive summarization, while it often generate words to “connect” those words, showing its aspect of abstractive summarization. 5.3 Single-turn Dialogue In this experiment we follow the work on neural dialogue model proposed in (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015), and test COPYNET on single-turn dialogue. Basically, the neural model learns to generate a response to user’s input, from the given (input, response) pairs as training instances. Dataset: We build a simple dialogue dataset based on the following three instructions: 1. Dialogue instances are collected from Baidu Tieba3 with some coverage of conversations of real life e.g., greeting and sports, etc. 2. Patterns with slots like hi, my name is x →hi, x are mined from the set, with possibly multiple responding patterns to one input. 3http://tieba.baidu.com 3. Similar with the synthetic dataset, we enlarge the dataset by filling the slots with suitable subsequence (e.g. name entities, dates, etc.) To make the dataset close to the real conversations, we also maintain a certain proportion of instances with the response that 1) do not contain entities or 2) contain entities not in the input. Experimental Setting: We create two datasets: DS-I and DS-II with slot filling on 173 collected patterns. The main difference between the two datasets is that the filled substrings for training and testing in DS-II have no overlaps, while in DS-I they are sampled from the same pool. For each dataset we use 6,500 instances for training and 1,500 for testing. We compare COPYNET with canonical RNNSearch, both character-based, with the same model configuration in Section 5.1. DS-I (%) DS-II (%) Models Top1 Top10 Top1 Top10 RNNSearch 44.1 57.7 13.5 15.9 COPYNET 61.2 71.0 50.5 64.8 Table 4: The decoding accuracy on the two testing sets. Decoding is admitted success only when the answer is found exactly in the Top-K outputs. We compare COPYNET and RNNSearch on DS-I and DS-II in terms of top-1 and top-10 accuracy (shown in Table 4), estimating respectively the chance of the top-1 or one of top-10 (from beam search) matching the golden. Since there are often many good responses to an input, top10 accuracy appears to be closer to the real world setting. As shown in Table 4, COPYNET significantly outperforms RNNsearch, especially on DS-II. It suggests that introducing the copying mechanism helps the dialogue system master the patterns in dialogue and correctly identify the correct parts of input, often proper nouns, to replicate in the response. Since the filled substrings have no overlaps in DS-II, the performance of RNNSearch drops significantly as it cannot handle words unseen in training data. 
In contrast, the performance of COPYNET only drops slightly as it has learned to fill the slots with the copying mechanism and relies less on the representation of the words. 5.3.1 Case Study As indicated by the examples in Figure 5, COPYNET accurately replicates the critical segments from the input with the copy-mode, and generates 1638 Input(1): 我出生在上海。 /I was born in Shanghai Golden: 真巧,我也来自上海,老乡啊! /What a coincidence, I also come from Shanghai, we are fellow townsman! RNNSearch: 真巧,我也来自海南,老乡啊! /What a coincidence, I also come from Hainan, we are fellow townsman! CopyNet: 真巧,我也来自上海,老乡啊! /What a coincidence, I also come from Shanghai, we are fellow townsman! Input(2): 今天去爬黄山吗? /Shall we climbing Huangshan? Golden: 去爬泰山吧,黄山没意思! /Climb Taishan. Huangshan is boring! RNNSearch: 去爬长白山吧,衡山没意思! /Climb Changbaishan, Hengshan is boring! CopyNet: 去爬嵩山吧,黄山没意思! /Climb Songshan, Huangshan is boring! Input(3): 你喜欢喝雪碧还是喜欢可乐? /Would you like to drink Sprite or Coke? Golden: 都不喜欢,谢谢! /Not at all, thanks! RNNSearch: 我喜欢喝蜂蜜! /I like to drink honey! CopyNet: 我喜欢喝可乐! /I like to drink coke! Input(4): 我大青岛四季宜人 /Our great Qingdao has a good weather! Golden: 壮我大青岛~~~ /Our Qingdao is great~~~ RNNSearch: 壮我大哈尔滨~~~ /Our Harbin is great~~~ CopyNet: 壮我大青岛~~~ /Our Qingdao is great~~~ Input(5): 女儿会游蝶泳 /Daughter can swim butterfly? Golden: 我连游泳都不会T^T /I cannot swim at all T^T RNNSearch: 我连游泳都不会T^T /I cannot swim at all T^T CopyNet: 我连游泳都不会T^T /I cannot swim at all T^T Input(6): 苏有朋是谁? /Who is Su You Peng? Golden: 苏有朋是一个男明星。 /Su You Peng is a male star. CopyNet: 苏有是一个男明星。 /Su You is a male star RNNSearch: 吴亦凡是一个男明星。 /Wu Yifan is a male star. Figure 5: Examples from the testing set of DS-II shown as the input text and golden, with the outputs of RNNSearch and CopyNet. Words in red rectangles are unseen in the training set. The highlighted words (with different colors) are those words with copy-mode probability higher than the generate-mode. Green cirles (meaning correct) and red cross (meaning incorrect) are given based on human judgment on whether the response is appropriate. the rest of the answers smoothly by the generatemode. Note that in (2) and (3), the decoding sequence is not exactly the same with the standard one, yet still correct regarding to their meanings. In contrast, although RNNSearch usually generates answers in the right formats, it fails to catch the critical entities in all three cases because of the difficulty brought by the unseen words. 6 Related Work Our work is partially inspired by the recent work of Pointer Networks (Vinyals et al., 2015a), in which a pointer mechanism (quite similar with the proposed copying mechanism) is used to predict the output sequence directly from the input. In addition to the difference with ours in application, (Vinyals et al., 2015a) cannot predict outside of the set of input sequence, while COPYNET can naturally combine generating and copying. COPYNET is also related to the effort to solve the OOV problem in neural machine translation. Luong et al. (2015) introduced a heuristics to postprocess the translated sentence using annotations on the source sentence. In contrast COPYNET addresses the OOV problem in a more systemic way with an end-to-end model. However, as COPYNET copies the exact source words as the output, it cannot be directly applied to machine translation. 
However, such copying mechanism can be naturally extended to any types of references except for the input sequence, which will help in applications with heterogeneous source and target sequences such as machine translation. The copying mechanism can also be viewed as carrying information over to the next stage without any nonlinear transformation. Similar ideas are proposed for training very deep neural networks in (Srivastava et al., 2015; He et al., 2015) for classification tasks, where shortcuts are built between layers for the direct carrying of information. Recently, we noticed some parallel efforts towards modeling mechanisms similar to or related to copying. Cheng and Lapata (2016) devised a neural summarization model with the ability to extract words/sentences from the source. Gulcehre et al. (2016) proposed a pointing method to handle the OOV words for summarization and MT. In contrast, COPYNET is more general, and not limited to a specific task or OOV words. Moreover, the softmaxCOPYNET is more flexible than gating in the related work in handling the mixture of two modes, due to its ability to adequately model the content of copied segment. 7 Conclusion and Future Work We proposed COPYNET to incorporate copying into the sequence-to-sequence learning framework. For future work, we will extend this idea to the task where the source and target are in heterogeneous types, for example, machine translation. Acknowledgments This work is supported in part by the China National 973 Project 2014CB340301. 1639 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. arXiv preprint arXiv:1603.07252. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: a large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865. Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. 2015. Neural random-access machines. arXiv preprint arXiv:1511.06392. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Stan Szpakowicz Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July. Association for Computational Linguistics. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19, Beijing, China, July. Association for Computational Linguistics. 
Geoffrey J McLachlan and Kaye E Basford. 1988. Mixture models. inference and applications to clustering. Statistics: Textbooks and Monographs, New York: Dekker, 1988, 1. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. CoRR, abs/1512.02433. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015b. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755–2763. 1640
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1641–1650, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-domain Text Classification with Multiple Domains and Disparate Label Sets Himanshu S. Bhatt, Manjira Sinha and Shourya Roy Xerox Research Centre India, Bangalore, INDIA {Firstname.Lastname}@Xerox.Com Abstract Advances in transfer learning have let go the limitations of traditional supervised machine learning algorithms for being dependent on annotated training data for training new models for every new domain. However, several applications encounter scenarios where models need to transfer/adapt across domains when the label sets vary both in terms of count of labels as well as their connotations. This paper presents first-of-its-kind transfer learning algorithm for cross-domain classification with multiple source domains and disparate label sets. It starts with identifying transferable knowledge from across multiple domains that can be useful for learning the target domain task. This knowledge in the form of selective labeled instances from different domains is congregated to form an auxiliary training set which is used for learning the target domain task. Experimental results validate the efficacy of the proposed algorithm against strong baselines on a real world social media and the 20 Newsgroups datasets. 1 Introduction A fundamental assumption in supervised statistical learning is that training and test data are independently and identically distributed (i.i.d.) samples drawn from a distribution. Otherwise, good performance on test data cannot be guaranteed even if the training error is low. On the other hand, transfer learning techniques allow domains, tasks, and distributions used in training and testing to be different, but related. It works in contrast to traditional supervised techniques on the principle of transferring learned knowledge across domains. Pan and Yang, in their survey paper (2010), deFigure 1: Cross-domain (a) sentiment classification and (b) subject classification. Illustrates (a) invariant and (b) disparate label sets. scribed different transfer learning settings depending on if domains and tasks vary as well as labeled data is available in one/more/none of the domains. In this paper, we propose a generic solution for multi-source transfer learning where domains and tasks are different and no labeled data is available in the target domain. This is a relatively less chartered territory and arguably a more generic setting of transfer learning. Motivating example: Consider a social media consulting company helping brands to monitor their social media channels. Two problems typically of interest are: (i) sentiment classification (is a post positive/negative/neutral?) and (ii) subject classification (what was the subject of a post?). While sentiment classification attempts to classify a post based on its polarity, subject classification is towards identifying the subject (or topic) of the post, as illustrated in Figure 1. The company has been using standard classification techniques from an off-the-shelf machine learning toolbox. While machine learning toolkit helps them to create and apply statistical models efficiently, the same model can not be applied on a new collection due to variations in data distributions across collections1. 
It requires a few hundreds of manually labeled posts for every task on every collec1A collection comprises comments/posts pertaining to a particular client/product/services. Domain and collection are used interchangeably. 1641 tion. As social media are extremely high velocity and low retention channels, human labeling efforts act like that proverbial narrow bottleneck. Need of the hour was to reduce, if not eliminate, the human-intensive labeling stage while continue to use machine learning models for new collections. Several transfer learning techniques exist in the literature which can reduce labeling efforts required for performing tasks in new collections. Tasks such as sentiment classification, named entity recognition (NER), part of speech (POS) tagging that have invariant label sets across domains, have shown to be greatly benefited from these works. On the other hand, tasks like subject classification that have disparate label sets across domains have not been able to gain at pace with the advances in transfer learning. Towards that we formulate the problem of Cross-domain classification with disparate label sets as learning an accurate model for the new unlabeled target domain given labeled data from multiple source domains where all domains have (possibly) different label sets. Our contributions: To the best of our knowledge, this is the first work to explore the problem of cross-domain text classification with multiple source domains and disparate label sets. The other contributions of this work includes a simple yet efficient algorithm which starts with identifying transferable knowledge from across multiple source domains useful for learning the target domain task. Specifically, it identifies relevant class-labels from the source domains such that the instances in those classes can induce classseparability in the target domain. This transferable knowledge is accumulated as an auxiliary training set for an algorithm to learn the target domain classification task followed by suitable transformation of the auxiliary training instances. Organization of the paper is as follows: Section 2 presents the preliminaries and notation, Section 3 summarizes the related work. Section 4 and 5 present the proposed algorithm and experimental results respectively. Section 6 concludes the paper. 2 Preliminaries and Notations A domain D = {X, P(X)} is characterized by two components: a feature space X and a marginal probability distribution P(X), where X = {x1, x2, ...xn} ∈X. A task T = {Y, f(·)} also consists of two components: a label space Y and an objective predictive function f(·). In our settings for cross-domain classification with disparate label sets, we assume M source domains, denoted as DSi, where i = {1, 2, ..M}. Each source domain has different marginal distribution i.e. P(XSi) ̸= P(XSj) and different label space i.e. YSi ̸= YSj, ∀i, j ∈M. The label space across domains vary both in terms of count of class-labels as well as their connotations; however, a finite set of labeled instances are available from each source domain. The target domain (DT ) consists of a finite set of unlabeled instances, denoted as ti where i = {1, .., N}. Let YT be the target domain label space with K class-labels. We assume that the number of classes in the target domain i.e. K is known (analogous to clustering where the number of clusters is given). 3 Related Work Table 1 summarizes different settings of transfer learning (Pan and Yang, 2010) and how this work differentiates from the existing literature2. 
The first scenario represents the ideal settings of traditional machine learning (Mitchell, 1997) where a model is trained on a fraction of labeled data and performs well for the same task on the future unseen instances from the same domain. The second scenario where the domains vary while the tasks remain the same is referred to as transductive transfer learning. This is the most extensively studied settings in the transfer learning literature and can be broadly categorized as single and multi-source adaptation. Single source adaptation (Chen et al., 2009; Ando and Zhang, 2005; Daum´e III, 2009) primarily aims at minimizing the divergence between the source and target domains either at instance or feature levels. The general idea being identifying a suitable low dimensional space where transformed source and target domains data follow similar distributions and hence, a standard supervised learning algorithm can be trained (Daum´e III, 2009; Jiang and Zhai, 2007; Pan et al., 2011; Blitzer et al., 2007; Pan et al., 2010; Dai et al., 2007; Bhatt et al., 2015). While several existing single source adaptation techniques can be extended to multi-source adaptation, the literature in multi-source adaptation can be broadly categorized as: 1) feature representation approaches (Chattopadhyay et al., 2012; Sun et al., 2011; Duan et al., 2009; Duan et al., 2012; 2This is not the complete view of the transfer learning literature; however, covers relevant work that helps motivate/differentiate the novel features of this paper. 1642 Table 1: Summarizing the related work and differentiating the novel features of the proposed algorithm. Scenario Settings Nature of Data Learning Paradigm Main Concepts Our Differentiation DS = DT , TS = TT Traditional Machine learning Labelled data in source domain(s) and unlabeled data in target domain Source and target domains are exactly the same Learn models on training set and test on future unseen data Allows tasks across domains to be different;a more general setting Transductive Labelled data in source domain(s) and Single source domain adaptation Learning common shared representation; instance weighing, parameter transfer Exploits multiple sources each with disparate label sets. DS ̸= DT , TS = TT Transfer Learning unlabeled data from the target domain P(XS) ̸= P(XT ) Multi-source adaptation Classifier combination; efficient combination of information from multiple sources; Feature representation Intelligent selection of transferable knowledge from multiple sources for adaptation. Inductive Unlabeled data in source domain(s) and labeled data in target domain Self-taught learning Extracts higher level representations from unlabeled auxiliary data to learn instance-to-label mapping with labeled target instances Learns instance-to-label mapping in the unlabeled target domain using multiple labeled source domains having different data distributions and label spaces. No conditions on DS & DT , but, TS ̸= TT Transfer Learning Labeled data is available in all domains Multi-task learning Simultaneously learns multiple tasks within (or across) domain(s) by exploiting the common feature subspace shared across the tasks Learns the optimal class distribution in an unlabeled target domain by minimizing the differences with multiple labeled source domains. DS ̸= DT , TS ̸= TT Kim et al. 
(2015) Labeled data in source and target domains Transfer learning with disparate label set Disparate fine grained label sets across domains, however, same coarse grained labels set can be invoked across domains No coarse-to-fine label mapping due to heterogeneity of label sets, Assumes no labelled data in target domain. Bollegala et al., 2013; Crammer et al., 2008; Mansour et al., 2009; Ben-David et al., 2010; Bhatt et al., 2016) and 2) combining pre-trained classifiers (Schweikert and Widmer, 2008; Sun and Shi, 2013; Yang et al., 2007; Xu and Sun, 2012; Sun et al., 2013). Our work differentiates in intelligently exploiting selective transferable knowledge from multiple sources unlike existing approaches where multiple sources contribute in a brute-force manner. The third scenario where the tasks differ irrespective of the relationship among domains is referred to as inductive transfer learning. Self-taught learning (Raina et al., 2007) and multi-task (Jiang, 2009; Maurer et al., 2012; Xu et al., 2015; Kumar and Daume III, 2012) learning are the two main learning paradigms in this scenario and Table 1 differentiates our work from these. This work closely relates to the fourth scenario where we allow domains to vary in the marginal probability distributions and the tasks to vary due to different label spaces3. The closest prior work by Kim et al. (2015) address a sequential labeling problem in NLU where the fine grained label sets across domains differ. However, they assume that there exists a bijective mapping between the coarse and fine-grained label sets across domains. They learn this mapping using labeled instances from the target domain to reduce the problem to a standard domain adaptation problem (Scenario 2). 3This work do not consider scenario when domains vary in feature spaces and tasks vary in the objective predictive functions. Figure 2: Illustrates different stages of the proposed algorithm. However, this paper caters to multiple source domains with disparate label sets without assuming availability of any labeled data from the target domain or fine-to-coarse label mappings across domains. 4 Cross-domain Classification with Disparate Label Set The underlying philosophy of the proposed algorithm is to learn the target domain task by using the available information from multiple source domains. To accomplish this, we have developed an algorithm to identify and extract partial transferable knowledge from multiple sources. This knowledge is then suitably transformed to induce classes in the target domain using the class separation from the source domains. Different stages of the proposed algorithm, as shown in Figure 2, are 1643 elaborated in the next sections. 4.1 Exploiting Multiple Domains If we had the mappings between the source and target domain label sets, we could have leveraged existing transfer learning approaches. However, heterogeneity of label sets across domains and the unlabeled data from the target domain exacerbate the problem. Our objective is to leverage the knowledge from multiple source domains to induce class-separability in the target domain. Inducing class-separability refers to segregating the target domain into K classes using labeled instances from selective K source domain classes. Towards this, the proposed algorithm divides each source domain into clusters/groups based on the class-labels such that instances with the same label are grouped in one cluster. 
All source domains are divided into Q clusters where Q = PM m=1||Ym|| represents the count of class-labels across all sources. ||Ym|| being the count of classlabels in the mth source domain. Cq denotes the qth cluster and µq denotes its centroid computed as the average of all the members in the cluster. We assert that the target domain instances that have high similarity to a particular source domain cluster can be grouped together. Given N target domain instances and Q source domain clusters, a matrix R (dimension N × Q) is computed based on the similarity of the target domain instances with the source clusters. The ith row of the matrix captures the similarity of the ith target domain instance (ti) with all the source domain clusters. It captures how different source domain class-labels are associated with the target domain instances and hence, can induce class-separability in the target domain. 4.2 Extracting Transferable Knowledge The similarity matrix R associates target domain instances to the source domain clusters in proportion to their similarity. However, the objective is to select the optimal K source domain clusters that fit the maximum number of target domain instances. This problem is similar to the well-known combinatorial optimization problem of Maximum Coverage (Vazirani, 2003) where given a collection of P sets, we need to select A sets (A < P) such that the size of the union of the selected sets is maximized. In this paper, we are given Q source domain clusters and need to select K clusters such that the corresponding number of associated target domain instances is maximized. As the Maximum Coverage problem is NP-hard, we implement a greedy algorithm for selecting the k source domain clusters, as illustrated in Algorithm 1. Algorithm 1 Selecting K Source Clusters Input: A matrix R, K = number target domain classes, l= number of selected cluster. Initialize: l = 0, Normalize R such that each row sums up to 1. repeat: 1: Pick the column in R which has maximum sum of similarity scores for uncovered target domain instances. 2: Mark elements in the chosen column as covered. 3: l = l + 1 until: l = K Output: K source domain clusters. A source domain contributes partially in terms of zero or more class-labels (clusters) identified using the Algorithm 1. Therefore, we refer to the labeled instances from the selected clusters of a source domain as the partial transferable knowledge from that domain. This partial transferable knowledge from across multiple source domains is congregated to form an auxiliary training set, referred to as (AUX). 4.3 Adapting to the Target Domain The auxiliary training set comprises labeled instances from selected K source domain clusters4. Since, the auxiliary set is pulled out from multiple source domains, it follows different data distribution as compared to the target domain. For a classifier, trained on the K-class auxiliary training set, the distributional variations have to be normalized so that it can generalize well on the target domain. In this research, we proposed to use an instance weighting technique (Jiang and Zhai, 2007) to minimize the distributional variations by deferentially weighting instances in the auxiliary set. Intuitively, the auxiliary training instances similar to the target domain are assigned higher weights while training the classifier and vice versa. The weight for the ith instance in the auxiliary set should be proportional to the ratio (Pt(xi)) (Pa(xi)). 
However, since the actual probability distributions 4The K classes in auxiliary set induce class-separability in the target domain, however, the actual class-labels across these two may not have any sort of coarse-to-fine mapping. 1644 Algorithm 2 Cross-domain Classification with Disparate Label Sets Input: M source domains, target domain instances (ti), i = (1, ..., N), K = number of target domain classes. Process: Divide M sources into Q clusters s.t. Q = PM q=1|Ym|. Cq be the qth cluster & µq be its centroid computed as shown in Eq 1. A: Exploiting Multiple Sources: for i = 0 : till N do for q = 0 : till Q do R[i, q] = Sim(µq, ti) end for end for B: Extracting partial knowledge: 1: Pick K columns from R using Algorithm 1. 2: Construct AUX by congregating instances from the selected K source domain class-labels. C: Adapting to target domain: 1: Minimize distributional variations using instance weighing technique. 2: Train a K-class classifier using AUX. Output: K-class target domain classifier. (Pa(x) and Pt(x) for the auxiliary set and target domain respectively) are unknown, the instance difference is approximated as (Pt(xi|d=target)) (Pa(xi|d=auxilliary)), where d is a random variable used to represent whether xi came from the auxiliary set or the target domain. To calculate this ratio, a binary classifier is trained using the auxiliary set and target domain data with labels {-1} and {+1} respectively. The predicted probabilities from the classifier are used to estimate the ratio as the weight for the ith auxiliary instance xi. Finally, a K-class classifier is trained on the weighted auxiliary training set to perform classification on the target domain data. 4.4 Algorithm As shown in Figure 2, the step-by-step flow of the proposed algorithm is summarized below: 1. Divide M source domains into Q clusters, each represented as Cq, q = {1, 2, .., Q}. 2. Compute centroid of each cluster as the average of the cluster members, as shown in Eq. 1. µq = 1 ||Cq|| ||Cq|| X (i=1;xi∈Cq) xi (1) where µq is the centroid, ||Cq|| is the membership count and xi is the ith member of Cq. 3. For target instances ti ∀i ∈N, compute cosine similarity with all the source domain cluster centroids to form the matrix R (dimensions: N × Q), as shown in Eq. 2 R[i, q] = Sim(µq, ti) = µq · ti ||µq|| ||ti|| (2) 4. Run Algorithm 1 on R to select K optimal source clusters (i.e. columns of R). 5. Congregate labeled instances from the selected source domain clusters to form the Kclass auxiliary training set. 6. Minimize the divergence between the auxiliary set and target domain using the instance weighing technique, described in Section 4.3. 7. Finally, train a K-class classifier on deferentially weighted auxiliary training instances to perform classification in the target domain. The K-class classifier trained on the auxiliary training set is an SVM classifier (Chih-Wei Hsu and Lin, 2003) with L2 −loss from the LIBLINEAR library (Fan et al., 2008). The classifier used in the instance weighing technique is again an SVM classifier with RBF kernel. The proposed algorithm uses distributional embedding i.e. Doc2Vec (Le and Mikolov, 2014) to represent instances from the multiple source and target domains. We used an open-source implementation of Doc2Vec (Le and Mikolov, 2014) for learning 400 dimensional vector representation using DBoW. 
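The instance-weighting step of Section 4.3 reduces to training a binary domain classifier and turning its predicted probabilities into a density ratio. The sketch below is only an illustration under the assumption of a scikit-learn-style API; the paper uses an RBF-kernel SVM for the domain classifier and LIBLINEAR's L2-loss SVM for the final K-class model, and all function and variable names here are ours rather than the authors'.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def density_ratio_weights(X_aux, X_tgt):
    """Approximate P_t(x) / P_a(x) for each auxiliary instance with a
    binary domain classifier (auxiliary = -1, target = +1), as in Sec. 4.3."""
    X = np.vstack([X_aux, X_tgt])
    d = np.concatenate([-np.ones(len(X_aux)), np.ones(len(X_tgt))])
    dom_clf = SVC(kernel="rbf", probability=True).fit(X, d)
    # predict_proba columns follow the sorted classes_, i.e. [-1, +1]
    p_tgt = dom_clf.predict_proba(X_aux)[:, 1]
    p_aux = np.clip(1.0 - p_tgt, 1e-6, None)
    return p_tgt / p_aux

def train_target_classifier(X_aux, y_aux, X_tgt):
    """Train the K-class classifier on the differentially weighted
    auxiliary set (LinearSVC stands in for LIBLINEAR's L2-loss SVM)."""
    weights = density_ratio_weights(X_aux, X_tgt)
    return LinearSVC(loss="squared_hinge").fit(X_aux, y_aux,
                                               sample_weight=weights)
```

In practice the estimated weights may be clipped or normalized so that a few auxiliary instances with extreme ratios do not dominate the weighted training objective.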
5 Experimental Evaluation Comprehensive experiments are performed to evaluate the efficacy of the proposed algorithm for cross-domain classification with disparate label sets across domains on two datasets. 5.1 Datasets The first dataset is a real-world Online Social Media (OSM) dataset which consists of 74 collections. Each collection comprises comments/tweets that are collected based on user-defined keywords. These keywords are fed to a listening engine which crawls the social media (i.e. Twitter.com) and fetches comments matching the keywords. The task is to classify the comments in a collection 1645 Table 2: Illustrates variability in label sets across some collections from the OSM dataset. Apple iPhone 6 Apple iOS 8 Apple iPad mini3 Camera Locking apps & features Release date & Features Design Extensibility features Apple play & NFC Review link General features related marketing Apple sim card Apple Play/NFC Camera features Touch ID Comparison to Android Password with touch integration iPad mini3 disappoints Price Health & fitness app Apple watch Location & Maps Firmware updates Table 3: Table illustrates the collections from the EMPATH database used in this research. Collection ID Domain #Categories Coll 1 Huwaei 5 Coll 2 Healthcare 9 Coll 3 Whattsapp 8 Coll 4 Apple iOS 8 8 Coll 5 Apple iPhone 6 7 into user-defined categories. These user-defined categories may vary across collections in terms of count as well as their connotations. Table 2 shows an example of the user-defined categories for a few collections related to “Apple” products. In the experiments, one collection is used as unlabeled target collection and the remaining collections are used as the labeled source collections. We randomly selected 5 target collections to report the performance, as described in Table 3. The second dataset is the 20 Newsgroups (NG) (Lang, 1995) dataset which comprises 20, 000 news articles organized into 6 groups with different sub-groups both in terms of count as well as connotations, as shown in Figure 3(a). Two different experiments are performed on this dataset. In the first experiment (“Exp-1”), one group is considered as the target domain and the remaining 5 groups as the source domains. In the second experiment (“Exp-2”), one sub-group from each of the first five groups5 is randomly selected to synthesize a target domain while all the groups (with the remaining sub-groups) are used as source domains. Figure 3(b) shows an example on how to synthesize target domains in “Exp-2”. There are 720 possible target domains in this experiment and we report the average performance across all possible target domains, referred to as “Grp 7”. The task in both the experiments is to categorize the target domain into its K categories (sub-groups) using labeled data from multiple source domains. 5Group-6 has only 1 sub-group, therefore, it is considered for synthesizing target domain in the experiments. Figure 3: Illustrates (a) different groups (b) target domain synthesis (“EXP 2”) on the NG dataset. 5.2 Evaluation Metric The performance is reported in terms of classification accuracy on the target domain. There is no definite mapping between the actual class-labels in the target domain and the K categories (i.e. induced categories) in the auxiliary training set. Therefore, we sequentially evaluate all possible one-to-one mappings between the K categories in the auxiliary training set and target domain to report results for the best performing mapping. 
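Concretely, the best-mapping accuracy of Section 5.2 can be computed by scoring every one-to-one assignment of induced categories to gold labels and keeping the maximum, as in the following sketch (names are ours; for larger K the exhaustive search could be replaced by the Hungarian algorithm):

```python
from itertools import permutations
import numpy as np

def best_mapping_accuracy(pred, gold, induced_labels, gold_labels):
    """Accuracy under the best one-to-one mapping between the K induced
    categories and the K target-domain class labels (Section 5.2).
    Exhaustive over K! permutations, which is feasible for the K <= 9 here."""
    pred, gold = np.asarray(pred), np.asarray(gold)
    best = 0.0
    for perm in permutations(gold_labels):
        mapping = dict(zip(induced_labels, perm))
        mapped = np.array([mapping[p] for p in pred])
        best = max(best, float(np.mean(mapped == gold)))
    return best
```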
5.3 Experimental Protocol The performance of the proposed algorithm is skylined by the in-domain performance (Gold), i.e. a classifier trained and tested on the labeled target domain data. We also compared the performance with spherical K-means clustering (Dhillon and Modha, 2001) used to group the target domain data into K categories against the ground truth, referred to CL. Spherical K-means clustering is based on cosine similarity and performs better for high-dimensional sparse data such as text. To compare with a baseline and an existing adaptation algorithm, we selected the most similar source domain6 with exactly K number of classlabels and report the performance on the best possible mapping, as described in Section 5.2. To compute the baseline (BL), a classifier trained on the source domain is used to categorize the target domain. A widely used domain adaptation algorithm, namely structural correspondence learning (SCL) (Blitzer et al., 2007) is also applied using the selected source domain. 6The most similar source domain is selected using proxyA distance (Blitzer et al., 2007) which has good correlation with domain adaptation performance. 1646 Figure 4: Compares the performance of different techniques on the OSM dataset. Table 4: Summarizes the performance of the proposed algorithm on the OSM dataset. Coll ID (#) BL CL SCL W/O Proposed Gold Coll 1 (5) 52.6 43.7 62.8 77.4 81.4 90.5 Coll 2 (9) 38.6 31.8 58.8 72.5 77.6 84.4 Coll 3 (8) 43.6 36.4 60.7 74.2 78.5 87.6 Coll 4 (8) 44.7 38.8 62.5 78.8 82.1 92.5 Coll 5 (7) 50.5 42.8 64.4 76.6 80.5 89.3 5.4 Results and Analysis Key observations and analysis from the experimental evaluations are summarized below: 5.4.1 Results on the OSM Dataset Results in Figure 4 and Table 4 show the efficacy of the proposed algorithm for cross-domain classification with disparate label sets as it outperforms other approaches by at least 15%. Coll ID(#) refers to the target collection and the corresponding count of class-labels. Results in Table 4 also compare the performance of the proposed technique without the distributional normalization of the auxiliary training set, referred to as “W/O”. Results suggest that suitably weighing instances from the auxiliary training set mitigates the distributional variations and enhances the cross-domain performance by at least 3.3%. 5.4.2 Results on the 20Newsgroups Dataset Results in Table 5 show that the proposed algorithm outperforms other techniques for both the experiments by at least 15 % and 18% respectively on the 20 Newsgroups dataset. In Table 5, “-” refers to the cases where a single source domain with the same number of class-labels as in the target domain is not available. In “Exp-1” where the source and target categories vary in terms of counts as well as their connotations, the proposed algorithm efficiently induces the classes in the unlabeled target domain using the partial transferable knowledge from multiple sources. For “Exp-2”, it is observed that the performance of the proposed algorithm is better than the performance in “Exp1” as the target categories have closely related categories (from the same group) in the source doFigure 5: Effects of selected source collections on the OSM dataset. Table 5: Summarizes the performance of the proposed algorithm on the 20Newsgroups dataset. 
Target(#) BL CL SCL W/O Proposed Gold Grp 1 (5) 48.6 79.4 80.8 85.6 Grp 2 (4) 62.7 50.2 62.7 78.3 83.6 89.2 Grp 3 (4) 64.3 54.8 64.4 81.6 85.3 90.4 Grp 4 (3) 69.6 55.6 67.3 82.2 86.4 92.5 Grp 5 (3) 69.7 56.4 70.3 83.6 85.3 91.2 Grp 7 (5) 52.8 84.6 88.4 93.8 mains. Table 5 reports the average performance across all the 720 possible combinations of target domains with a standard deviation of 2.6. 5.4.3 Effect of Multiple Source Domains Table 6 validates our assertion that multiple sources are necessary to induce class-separability in the target domain as a single source is not sufficient to cater to the heterogeneity of class-labels across domains. It also suggests that the proposed algorithm can learn class-separability in the target domain by using arbitrary diverse class-labels from different sources and does not necessarily require class-labels to follow any sort of coarse-tofine mapping across domains. To evaluate the effects of using multiple sources, further experiments were performed by varying the number of available source domains. For the OSM dataset, we varied the number of available source collections from 1 to 73 starting with the most similar source collection and repeatedly adding the next most similar collection in the pool of available collections. We observe that even the most similar collection was not independently sufficient to induce classes in the target collection and it was favorable to exploit multiple collections. Moreover, adding collections based on similarity to the target collection had a better likelihood of achieving higher performance as compared to adding random collections. In another experiment, we first identified the source collections which contributed to learning the target task. We removed these collections and applied the proposed algorithm on the remaining source collections. Figure 5 shows the perfor1647 Table 6: Actual target domain class-labels and the corresponding source domain class clusters used to build the auxiliary training set. Target Collection: Apple iOS 8 Associated Class-labels from multiple source collections Locking apps & security Anti Theft Features (Coll ID: 776 on Apple iOS 6 plus) Extensibility features Application update (Coll ID:720 on Apple iOS Features) General features related marketing General press (Coll ID: 163 on XBOX Issues) Camera features Camera (Coll ID: 775 on Apple iPhone 6 ) Password with touch integration Touch ID (Coll ID: 803 on Apple iPad mini3) Health & fitness app Reproductive health issues (Coll ID: 289 on Healthcare) Location & Maps Events (Coll ID: 502 on L’Oreal) Firmware updates Updates & patches (Coll ID: 478 on Riot Game Support v2) mance of the proposed algorithm on 5 such iterations of removing the contributing source collections from the previous iteration. We observed a significant drop in the performance with each iteration which signifies the effectiveness of the proposed algorithm in extracting highly discriminating transferable knowledge from multiple sources. 5.4.4 Comparing with Domain Adaptation We applied domain adaptation techniques considering the auxiliary training set to be a single source domain with the same number of classes as that in the target domain. We applied two of the widely used domain adaptation techniques, namely SCL (Blitzer et al., 2007) and SFA (Pan et al., 2010) referred to as “AuxSCL” and “AuxSFA” respectively. Results in Table 7 suggest that the proposed algorithm significantly outperforms “AuxSCL” and “AuxSFA” on the two datasets. 
Generally, existing domain adaptation techniques are built on the co-occurrences of the common features with the domain specific features and hence, capture how domain specific features in one domain behaves w.r.t to the domain specific features in the other domain. They assume homogeneous labels and expect the aligned features across domains to behave similarly for the prediction task. However, these features are misaligned when the label set across domains vary in terms of their connotations. 5.4.5 Effect of Different Representations The proposed algorithm uses Doc2Vec (Le and Mikolov, 2014) for representing instances from multiple domains. However, the proposed algorithm can build on different representations and hence, we compare its performance with traditional TF-IDF representation (including unigrams Table 7: Comparing the proposed algorithm with existing domain adaptation algorithms. Dataset Target SCL SFA Proposed OSM Coll 1 66.2 64.7 81.4 Coll 2 63.8 62.6 77.6 Coll 3 64,1 63.4 78.5 Coll 4 64.2 65.2 82.1 Coll 5 64.0 63.7 80.5 Grp 1 65.2 64.2 80.8 NG Grp 2 68.2 65.3 83.6 Exp-1 Grp 3 69.4 68.4 85.3 Grp 4 70.3 69.2 86.4 Grp 5 69.0 68.8 85.3 NG Exp-2 Grp 7 72.6 70.2 88.4 Table 8: Comparing different representations. Dataset Target TF-IDF TF-IDF +PCA Doc2Vec OSM Coll 1 70.6 76.8 81.4 Coll 2 69.5 74.2 77.6 Coll 3 70.2 75.5 78.5 Coll 4 71.6 77.9 82.1 Coll 5 70.8 76.8 80.5 Grp 1 71.8 75.6 80.8 NG Grp 2 73.6 77.5 83.6 Exp-1 Grp 3 77.4 81.1 85.3 Grp 4 76.6 82.5 86.4 Grp 5 75.5 81.4 85.3 NG Exp-2 Grp 7 76.2 83.6 88.4 and bigrams) and a dense representation using TFIDF+PCA ( reduced to a dimension such that it covers 90% of the variance). We observe that Doc2Vec representation clearly outperforms the other two representations as it addresses the drawbacks of bag-of n-gram models in terms of implicitly inheriting the semantics of the words in a document and offering a more generalizable concise vector representation. 6 Conclusions This paper presented the first study on crossdomain text classification in presence of multiple domains with disparate label sets and proposed a novel algorithm for the same. It proposed to extract partial transferable knowledge from across multiple source domains which was beneficial for inducing class-separability in the target domain. The transferable knowledge was assimilated in terms of selective labeled instances from different source domain to form a K-class auxiliary training set. Finally, a classifier was trained using this auxiliary training set, following a distribution normalizing instance weighing technique, to perform the classification task in the target domain. The efficacy of the proposed algorithm for cross-domain classification across disparate label sets will expand the horizon for ML-based algorithms to be more widely applicable in more general and practically observed scenarios. 1648 References Rie Kubota Ando and Tong Zhang. 2005. A highperformance semi-supervised learning method for text chunking. In Proceedings of Association for Computational Linguistics, pages 1–9. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2):151–175. Himanshu Sharad Bhatt, Deepali Semwal, and Shourya Roy. 2015. An iterative similarity based adaptation technique for cross-domain text classification. In Proceedings of Conference on Natural Language Learning, pages 52–61. Himanshu Sharad Bhatt, Arun Rajkumar, and Shourya Roy. 2016. 
Multi-source iterative adaptation for cross-domain classification. In In Proceedings of International Joint Conference on Artificial Intelligence. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of Association for Computational Linguistics, pages 440– 447. Danushka Bollegala, David Weir, and John Carroll. 2013. Cross-domain sentiment classification using a sentiment sensitive thesaurus. IEEE Transactions on Knowledge and Data Engineering, 25(8):1719–1731. Rita Chattopadhyay, Qian Sun, Wei Fan, Ian Davidson, Sethuraman Panchanathan, and Jieping Ye. 2012. Multisource domain adaptation and its application to early detection of fatigue. ACM Transactions on Knowledge Discovery from Data, 6(4):1–26. Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. 2009. Extracting discriminative concepts for domain adaptation in text mining. In Proceedings of International Conference on Knowledge Discovery and Data Mining, pages 179–188. Chih-Chung Chang Chih-Wei Hsu and Chih-Jen Lin. 2003. A practical guide to support vector classification. Technical report, Department of Computer Science, National Taiwan University. Koby Crammer, Michael Kearns, and Jennifer Wortman. 2008. Learning from multiple sources. Journal of Machine Learning Research, 9(1):1757–1774. Wenyuan Dai, Gui-Rong Xue, Qiang Yang, and Yong Yu. 2007. Co-clustering based classification for out-ofdomain documents. In Proceedings of International Conference on Knowledge Discovery and Data Mining, pages 210–219. Hal Daum´e III. 2009. Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815. Inderjit S. Dhillon and Dharmendra S. Modha. 2001. Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1):143–175. Lixin Duan, Ivor W. Tsang, Dong Xu, and Tat-Seng Chua. 2009. Domain adaptation from multiple sources via auxiliary classifiers. In Proceedings International Conference on Machine Learning, pages 289–296. Lixin Duan, Dong Xu, and Ivor Wai-Hung Tsang. 2012. Domain adaptation from multiple sources: a domaindependent regularization approach. IEEE Transactions on Neural Networks and Learning Systems, 23(3):504518. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. J. Jiang and C. Zhai. 2007. Instance weighting for domain adaptation in NLP. In Proceedings of Association for Computational Linguistics, volume 7, pages 264–271. Jing Jiang. 2009. Multi-task transfer learning for weaklysupervised relation extraction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1012–1020. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning techniques for disparate label sets. In Proceedings of Association for Computational Linguistics. Abhishek Kumar and Hal Daume III. 2012. Learning task grouping and overlap in multi-task learning. arXiv preprint arXiv:1206.6417. Ken Lang. 1995. NewsWeeder: Learning to filter netnews. In Proceedings of International Conference on Machine Learning. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196. 
Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems, pages 1041–1048. Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. 2012. Sparse coding for multitask and transfer learning. arXiv preprint arXiv:1209.0738. Thomas M. Mitchell. 1997. Machine Learning. McGrawHill, Inc., New York, NY, USA, 1 edition. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345–1359. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of International Conference on World Wide Web, pages 751– 760. Sinno Jialin Pan, , Ivor W. Tsang, James T. Kwok, and Qiang Yang. 2011. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learning: Transfer learning from unlabeled data. In International Conference on Machine Learning, pages 759–766. 1649 Gabriele Schweikert and Christian Widmer. 2008. An empirical analysis of domain adaptation algorithms for genomic sequence analysis. In Advances in Neural Information Processing Systems, pages 1433–1440. Shi-Liang Sun and Hong-Lei Shi. 2013. Bayesian multisource domain adaptations. In International Conference on Machine Learning and Cybernetics, pages 24–28. Qian Sun, Rita Chattopadhyay, Sethuraman Panchanathan, and Jieping Ye. 2011. A two-stage weighting framework for multi-source domain adaptation. In Advances in Neural Information Processing Systems, pages 505–513. Shiliang Sun, Zhijie Xu, and Mo Yang. 2013. Transfer learning with part-based ensembles. In Multiple Classifier Systems, volume 7872, pages 271–282. Vijay V. Vazirani. 2003. Approximation Algorithms. Springer-Verlag Berlin Heidelberg. Zhijie Xu and Shiliang Sun. 2012. Multi-source transfer learning with multi-view adaboost. In Proceedings of International Conference on Neural Information Processing, pages 332–339. Linli Xu, Aiqing Huang, Jianhui Chen, and Enhong Chen. 2015. Exploiting task-feature co-clusters in multi-task learning. In Proceedings of Association for the Advancement of Artificial Intelligence, pages 1931–1937. Jun Yang, Rong Yan, and Alexander G. Hauptmann. 2007. Cross-domain video concept detection using adaptive svms. In Proceedings of International Conference on Multimedia, pages 188–197. 1650
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1651–1660, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Morphological Smoothing and Extrapolation of Word Embeddings Ryan Cotterell Department of Computer Science Johns Hopkins University, USA [email protected] Hinrich Sch¨utze CIS LMU Munich, Germany [email protected] Jason Eisner Department of Computer Science Johns Hopkins University, USA [email protected] Abstract Languages with rich inflectional morphology exhibit lexical data sparsity, since the word used to express a given concept will vary with the syntactic context. For instance, each count noun in Czech has 12 forms (where English uses only singular and plural). Even in large corpora, we are unlikely to observe all inflections of a given lemma. This reduces the vocabulary coverage of methods that induce continuous representations for words from distributional corpus information. We solve this problem by exploiting existing morphological resources that can enumerate a word’s component morphemes. We present a latentvariable Gaussian graphical model that allows us to extrapolate continuous representations for words not observed in the training corpus, as well as smoothing the representations provided for the observed words. The latent variables represent embeddings of morphemes, which combine to create embeddings of words. Over several languages and training sizes, our model improves the embeddings for words, when evaluated on an analogy task, skip-gram predictive accuracy, and word similarity. 1 Introduction Representations of words as high-dimensional real vectors have been shown to benefit a wide variety of NLP tasks. Because of this demonstrated utility, many aspects of vector representations have been explored recently in the literature. One of the most interesting discoveries is that these representations capture meaningful morpho-syntactic and semantic properties through very simple linear relations: in a semantic vector space, we observe that vtalked −vtalk ≈vdrank −vdrink. (1) That this equation approximately holds across many morphologically related 4-tuples indicates bebieron comieron bebemos comemos Figure 1: A visual depiction of the vector offset method for morpho-syntactic analogies in R2. We expect bebieron and bebemos to have the same relation (vector offset shown as solid vector) as comieron and comemos. that the learned embeddings capture a feature of English morphology—adding the past tense feature roughly corresponds to adding a certain vector. Moreover, manipulating this equation yields what we will call the vector offset method (Mikolov et al., 2013c) for approximating other vectors. For instance, if we only know the vectors for the Spanish words comieron (ate), comemos (eat) and bebieron (drank), we can produce an approximation of the vector for bebemos (drink), as shown in Figure 1. Many languages exhibit much richer morphology than English. While English nouns commonly take two forms – singular and plural— Czech nouns take 12 and Turkish nouns take over 30. This increase in word forms per lemma creates considerable data sparsity. Fortunately, for many languages there exist large morphological lexicons, or better yet, morphological tools that can analyze any word form—meaning that we have analyses (usually accurate) for forms that were unobserved or rare in our training corpus. 
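For concreteness, the vector offset approximation sketched in Figure 1 amounts to one step of vector arithmetic followed by a nearest-neighbour lookup. The snippet below is only an illustration of that baseline, not the model proposed in this paper; `emb` is an assumed word-to-vector dictionary and all names are ours.

```python
import numpy as np

def vector_offset(emb, a, b, c):
    """Offset approximation v_a - v_b + v_c, e.g.
    bebemos is approximated by bebieron - comieron + comemos (Figure 1)."""
    return emb[a] - emb[b] + emb[c]

def nearest_word(emb, query, exclude=()):
    """Vocabulary word whose vector has highest cosine similarity to `query`."""
    q = query / np.linalg.norm(query)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in exclude:
            continue
        sim = float(vec @ q) / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# guess = vector_offset(emb, "bebieron", "comieron", "comemos")
# nearest_word(emb, guess, exclude={"bebieron", "comieron", "comemos"})
```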
Our proposed method runs as a fast postprocessor (taking under a minute to process 100dimensional embeddings of a million observed word types) on the output of any existing tool that constructs word embeddings, such as WORD2VEC. 1651 Indicative Subjunctive Sg Pl Sg Pl 1 bebo bebemos beba bebamos 2 bebes beb´eis bebas beb´ais 3 bebe beben beba beban Table 1: The paradigm of the Spanish verb BEBER (to drink). The paradigm actually consists of > 40 word forms; only the present tense portion is shown here. In this output, some embeddings are noisy or missing, due to sparse training data. We correct these problems by using a Gaussian graphical model that jointly models the embeddings of morphologically related words. Inference under this model can smooth the noisy embeddings that were observed in the WORD2VEC output. In the limiting case of a word for which no embedding was observed (equivalent to infinite noise), inference can extrapolate one based on the observed embeddings of related words—a kind of global version of the vector offset method. The structure of our graphical model is defined using morphological lexicons, which supply analyses for each word form. We conduct a comprehensive study of our ability to modify and generate vectors across five languages. Our model also dramatically improves performance on the morphological analogy task in many cases: e.g., accuracy at selecting the nominative plural forms of Czech nouns is 89%, ten times better than the standard analogy approach. 2 Background: Inflectional Morphology Many languages require every verb token to be inflected for certain properties, such as person, number, tense, and mood. A verbal paradigm such as Table 1 lists all the inflected forms of a given verb. We may refer to this verb in the abstract by its lemma, BEBER—but when using it in a sentence, we must instead select from its paradigm the word type, such as beb´eis, that expresses the contextually appropriate properties. Noun tokens in a language may similarly be required to be inflected for properties such as case, gender, and number. A content word is chosen by specifying a lemma (which selects a particular paradigm) together with some inflectional attributes (which select a particular slot within that paradigm). For example, [Lemma=EAT, Person=3, Number=SINGULAR, Tense=PRESENT] is a bundle of attribute-value pairs that would be jointly expressed in English by the word form eats (Sylak-Glassman et al., 2015). The regularities observed by Mikolov et al. (2013c) hold between words with similar attributevalue pairs. In Spanish, the word beben “they drink” (Table 1) can be analyzed as expressing the bundle [Lemma=BEBER, Person=3, Number=PLURAL, Tense=PRESENT]. Its vector similarity to bebemos “we drink” is due to the fact that both word forms have the same lemma BEBER. Likewise, the vector similarity of beben to comieron “they ate” is due to the conceptual similarity of their lemmas, BEBER “drink” and COMER “eat”. Conversely, that beben is similar to preguntan “they ask” is caused by shared inflectional attributes [Person=3, Number=PLURAL, Tense=PRESENT]. Under cosine similarity, the most similar words are often related on both axes at once: e.g., one of the word forms closest to beben typically is comen “they eat”. 3 Approach Following this intuition, we fit a directed Gaussian graphical model (GGM) that simultaneously considers (i) each word’s embedding (obtained from an embedding model like WORD2VEC) and (ii) its morphological analysis (obtained from a lexical resource). 
We then use this model to smooth the provided embeddings, and to generate embeddings for unseen inflections. For a lemma covered by the resource, the GGM can produce embeddings for all its forms (if at least one of these forms has a known embedding); this can be extended to words not covered using a guesser like MORFESSOR (Creutz and Lagus, 2007) or CHIPMUNK (Cotterell et al., 2015a). A major difference of our approach from related techniques is that our model uses existing morphological resources (e.g., morphological lexicons or finite-state analyzers) rather than semantic resources (e.g., WordNet (Miller et al., 1990) and PPDB (Ganitkevitch et al., 2013)). The former tend to be larger: we often can analyze more words than we have semantic representations for. It would be possible to integrate our GGM into the training procedure for a word embedding system, making that system sensitive to morphological attributes. However, the postprocessing approach in our present paper lets us use any existing word embedding system as a black box. It is simple to implement, and turns out to get excellent results, which will presumably improve further as 1652 veats vran weats wate wruns wran wrunning minfl=vbg minfl=vbd mlem=eat minfl=vbp weating mlem=run veating Figure 2: A depiction of our directed Gaussian graphical model (GGM) for the English verbal paradigm. Each variable represents a vector in Rn; thus, this is not the traditional presentation of a GGM in which each node would be a single realvalued random variable, but each node represents a real-valued random vector. The shaded nodes vi at the bottom are observed word embeddings. The nodes wi at the middle layer are smoothed or extrapolated word embeddings. The nodes mk at the top are latent embeddings of morphemes. better black boxes become available. 4 A Generative Model Figure 2 draws our GGM’s structure as a Bayes net. In this paper, we loosely use the term “morpheme” to refer to an attribute-value pair (possibly of the form Lemma=. . . ). Let M be the set of all morphemes. In our model, each morpheme k ∈M has its own latent embedding mk ∈Rn. These random variables are shown as the top layer of Figure 2. We impose an IID spherical Gaussian prior on them (similar to L2 regularization with strength λ > 0): mk ∼N(0, λ−1I), ∀k (2) Let L be the lexicon of all word types that appear in our lexical resource. (The noun and verb senses of bat are separate entries in L.) In our model, each word i ∈L has a latent embedding wi ∈Rn. These random variables are shown as the middle layer of Figure 2. We assume that each wi is simply a sum of the mk for its component morphemes Mi ⊆M (shown in Figure 2 as wi’s parents), plus a Gaussian perturbation: wi ∼N( X k∈Mi mk, Σi), ∀i (3) This perturbation models idiosyncratic usage of word i that is not predictable from its morphemes. The covariance matrix Σi is shared for all words i with the same coarse POS (e.g., VERB). Our system’s output will be a guess of all of the wi. Our system’s input consists of noisy estimates vi for some of the wi, as provided by a black-box word embedding system run on some large corpus C. (Current systems estimate the same vector for both senses of bat.) These observed random variables are shown as the bottom layer of Figure 2. We assume that the black-box system would have recovered the “true” wi if given enough data, but instead it gives a noisy small-sample estimate vi ∼N(wi, 1 ni Σ′ i), ∀i (4) where ni is the count of word i in training corpus C. 
This formula is inspired by the central limit theorem, which guarantees that vi’s distribution would approach (4) (as ni →∞) if it were estimated by averaging a set of ni noisy vectors drawn IID from any distribution with mean wi (the truth) and covariance matrix Σ′ i. A system like WORD2VEC does not precisely do that, but it does choose vi by aggregating (if not averaging) the influences from the contexts of the ni tokens. The parameters λ, Σi, Σ′ i now have likelihood p(v) = Z p(v, w, m) dw dm, where (5) p(v, w, m) = Y k∈M p(mk) · Y i∈L p(wi |mk : k ∈Mi) · p(vi |wi) (6) Here m = {mk : k ∈M} represents the collection of all latent morpheme embeddings, and similarly w = {wi : i ∈L} and v = {vi : i ∈L}. We take p(vi | wi) = 1 if no observation vi exists. How does the model behave qualitatively? If ˆwi is the MAP estimate of wi, then ˆwi →vi as ni →∞, but ˆwi →P k∈Mi mk as ni →0. This 1653 is because (3) and (4) are in tension; when ni is small, (4) is weaker and we get more smoothing. The morpheme embeddings mk are largely determined from the observed embeddings vi of the frequent words (since mk aims via (2)–(3) to explain wi, which ≈vi when i is frequent). That determines the compositional embedding P k∈Mi mk toward which the wi of a rarer word is smoothed (away from vi). If vi is not observed or if ni = 0, then ˆwi = P k∈Mi mk exactly. 5 Inference Suppose first that the model parameters are known, and we want to reconstruct the latent vectors wi. Because the joint density p(v, w, m) in (6) is a product of (sometimes degenerate) Gaussian densities, it is itself a highly multivariate Gaussian density over all elements of all vectors.1 Thus, the posterior marginal distribution of each wi is Gaussian as well. A good deal is known about how to exactly compute these marginal distributions of a Gaussian graphical model (e.g., by matrix inversion) or at least their means (e.g., by belief propagation) (Koller and Friedman, 2009). For this paper, we adopt a simpler method— MAP estimation of all latent vectors. That is, we seek the w, m that jointly maximize (6). This is equivalent to minimizing X k λ||mk||2 2 + X i ||wi − X k∈Mi mk||2 Σi + X i ||vi −wi||2 Σ′ i/ni, (7) which is a simple convex optimization problem.2 We apply block coordinate descent until numerical convergence, in turn optimizing each vector mk or wi with all other vectors held fixed. This finds the global minimum (convex objective) and is extremely fast even when we have over a hundred million real variables. Specifically, we update mk ← λI + X i∈Wk Σ i −1 X i∈Wk Σ i(wi − X j∈Mi,j̸=k mj), where Σ def= Σ−1 is the inverse covariance matrix and Wk def= {i : k ∈Mi}. This updates mk so 1Its inverse covariance matrix is highly sparse: its pattern of non-zeros is related to the graph structure of Figure 2. (Since the graphical model in Figure 2 is directed, the inverse covariance matrix has a sparse Cholesky decomposition that is even more directly related to the graph structure.) 2By definition, ||x||2 A def = xT Ax. the partial derivatives of (7) with respect to the components of mk are 0. In effect, this updates mk to a weighted average of several vectors. Morpheme k participates in words i ∈Wk, so its vector mk is updated to the average of the contributions (wi −P j∈Mi,j̸=k mj) that mk would ideally make to the embeddings wi of those words. The contribution of wi is “weighted” by the inverse covariance matrix Σ i. Because of prior (2), 0 is also included in the average, “weighted” by λI. 
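A minimal NumPy rendering of this m_k coordinate update is given below; it assumes the per-word inverse covariance (precision) matrices have been precomputed, and the data-structure and variable names are ours, not part of the authors' implementation.

```python
import numpy as np

def update_morpheme(k, m, w, prec, words_with, morphemes_of, lam):
    """One block-coordinate step for the morpheme vector m[k] (Section 5):
    m_k <- (lam*I + sum_i prec[i])^{-1} sum_i prec[i] (w[i] - sum_{j != k} m[j]),
    where the sums run over the words i that contain morpheme k."""
    n = len(m[k])
    A = lam * np.eye(n)        # the prior N(0, lam^{-1} I) contributes lam*I
    b = np.zeros(n)
    for i in words_with[k]:    # W_k = {i : k in M_i}
        residual = w[i] - sum(m[j] for j in morphemes_of[i] if j != k)
        A += prec[i]           # prec[i] is the precision matrix of word i
        b += prec[i] @ residual
    m[k] = np.linalg.solve(A, b)
```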
Similarly, the update rule for $\mathbf{w}_i$ is

$$\mathbf{w}_i \leftarrow \big( n_i \Sigma_i'^{-1} + \Sigma_i^{-1} \big)^{-1} \big( n_i \Sigma_i'^{-1} \mathbf{v}_i + \Sigma_i^{-1} \sum_{k \in M_i} \mathbf{m}_k \big),$$

which can likewise be regarded as a weighted average of the observed and compositional representations.3 See Appendix C for the derivations.

6 Parameter Learning

We wish to optimize the model parameters λ, Σi, Σ′i by empirical Bayes. That is, we do not place a prior on these parameters, but simply perform maximum likelihood estimation. A standard approach is the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), which locally maximizes the likelihood by alternating between reconstructing the latent vectors given the parameters (E step) and optimizing the parameters given the latent vectors (M step). In this paper, we use the Viterbi approximation to the E step, that is, MAP inference as described in section 5. Thus, our overall method is Viterbi EM. As all conditional probabilities in the model are Gaussian, the M step has a closed form. The MLE of a covariance matrix is a standard result; in our setting the update to Σi takes the form

$$\Sigma_c \leftarrow \frac{1}{N_c} \sum_{i\,:\,c \in C(i)} \Big( \mathbf{w}_i - \sum_{k \in M_i} \mathbf{m}_k \Big) \Big( \mathbf{w}_i - \sum_{k \in M_i} \mathbf{m}_k \Big)^{\top},$$

where C(i) are i's POS tags, $N_c = |\{i : c \in C(i)\}|$, and Σc is the matrix for the cth POS tag (the matrices are tied by POS). In this paper we simply fix Σ′i = I rather than fitting it.4 Also, we tune the hyperparameter λ on a development set, using grid search over the values {0.1, 0.5, 1.0}.

3 If vi is not observed, take ni = 0. In fact it is not necessary to represent this wi during optimization: simply omit i from all Wk and, after convergence, set $\mathbf{w}_i \leftarrow \sum_{k \in M_i} \mathbf{m}_k$.
4 Note that it is not necessary to define it as λ′I, introducing a new scale parameter λ′, since doubling λ′ would have the same effect on the MAP update rules as halving λ and Σi.

Viterbi EM can be regarded as block coordinate descent on the negative log-likelihood function, with the E and M steps both improving this common objective along different variables. We update the parameters (M step above) after every 10 passes of updating the latent vectors (section 5's E step).

7 Related Work

Our postprocessing strategy is inspired by Faruqui et al. (2015), who designed a retrofitting procedure to modify pre-trained vectors such that their relations match those found in semantic lexicons. We focus on morphological resources rather than semantic lexicons, and employ a generative model. More importantly, in addition to modifying vectors of observed words, our model can generate vectors for forms not observed in the training data. Wieting et al. (2015) compute compositional embeddings of phrases, with their simplest method being additive (like ours) over the phrase's words. Their embeddings are tuned to fit observed phrase similarity scores from PPDB (Ganitkevitch et al., 2013), which allows them to smooth and extend PPDB just as we do to WORD2VEC output. Using morphological resources to enhance embeddings at training time has been examined by numerous authors. Luong et al. (2013) used MORFESSOR (Creutz and Lagus, 2007), an unsupervised morphological induction algorithm, to segment the training corpus. They then trained a recursive neural network (Goller and Kuchler, 1996; Socher, 2014) to generate compositional word embeddings. Our model is much simpler and faster to train; their evaluation was limited to English and focused on rare English words. dos Santos and Zadrozny (2014) introduced a neural tagging architecture (Collobert et al., 2011) with a character-level convolutional layer. Qiu et al.
(2014) and Botha and Blunsom (2014) both use MORFESSOR segmentations to augment WORD2VEC and a logbilinear (LBL) language model (Mnih and Hinton, 2007), respectively. Similar to us, they have an additive model of the semantics of morphemes, i.e., the embedding of the word form is the sum of the embeddings of its constituents. In contrast to us, however, both include the word form itself in the sum. Finally, Cotterell and Sch¨utze (2015) jointly trained an LBL language model and a morphological tagger (Hajiˇc, 2000) to encourage the embeddings to encode rich morphology. With the exception of (Cotterell and Sch¨utze, 2015), all of the above methods use unsupervised methods to infuse word embeddings with morphology. Our approach is supervised in that we use a morphological lexicon, i.e., a manually built resource. Our model is also related to other generative models of real vectors common in machine learning. The simplest of them is probabilistic principal component analysis (Roweis, 1998; Tipping and Bishop, 1999), a generative model of matrix factorization that explains a set of vectors via latent low-dimensional vectors. Probabilistic canonical correlation analysis similarly explains a set of pairs of vectors (Bach and Jordan, 2005). Figure 2 has the same topology as our graphical model in (Cotterell et al., 2015b). In that work, the random variables were strings rather than vectors. Morphemes were combined into words by concatenating strings rather than adding vectors, and then applying a stochastic edit process (modeling phonology) rather than adding Gaussian noise. 8 Experiments We perform three experiments to test the ability of our model to improve on WORD2VEC. To reiterate, our approach does not generate or analyze a word’s spelling. Rather, it uses an existing morphological analysis of a word’s spelling (constructed manually or by a rule-based or statistical system) as a resource to improve its embedding. In our first experiment, we attempt to identify a corpus word that expresses a given set of morphological attributes. In our second experiment, we attempt to use a word’s embedding to predict the words that appear in its context, i.e., the skip-gram objective of Mikolov et al. (2013a). Our third example attempts to use word embeddings to predict human similarity judgments. We experiment on 5 languages: Czech, English, German, Spanish and Turkish. For each language, our corpus data consists of the full Wikipedia text. Table 5 in Appendix A reports the number of types and tokens and their ratio. The lexicons we use are characterized in Table 6: MorfFlex CZ for Czech (Hajiˇc and Hlav´aˇcov´a, 2013), CELEX for English and German (Baayen et al., 1993) and lexicons for Spanish and Turkish that were scraped from Wiktionary by Sylak-Glassman et al. (2015). Given a finite training corpus C and a lexicon L,5 we generate embeddings vi for all word 5L is finite in our experiments. It could be infinite (though still incomplete) if a morphological guesser were used. 
1655 Nom Sg Nom Pl Gen Sg Gen Pl Dat Sg Dat Pl Acc Sg Acc Pl Ins Sg Ins Pl Voc Sg Voc Pl Voc Pl Voc Sg Ins Pl Ins Sg Acc Pl Acc Sg Dat Pl Dat Sg Gen Pl Gen Sg Nom Pl Nom Sg GGM 4.7 0 0 3.5 1.3 5.5 2.2 0 2.1 3.7 0 – 0 0 0 0 0 5 5 0 0.83 0 – 0 2 6 3 2.5 1.7 4.3 3.6 7.7 0.69 – 0 4.6 4 1.6 1.6 2.3 2.7 0.36 1.6 2.4 – 0.18 0.83 2.9 1.3 15 1.2 6.2 2.6 2.6 2.2 – 2.8 2.7 0 0 3.9 2.7 0.74 2.1 1.7 0 – 2.4 0.58 2.5 5 2.2 3.8 4.8 2.4 8 0.86 – 0.33 7.2 1 2.7 0 5 2 2 0 3.3 – 0 1.3 0.61 2.8 0 0 2 1.8 4.9 4.8 – 4.7 2.5 3.3 7.2 3.1 0.92 1.7 3.7 2 1.7 – 2.9 0.67 0.5 0.98 0.5 1.3 1.6 0 0.33 4.6 – 1.7 3.4 1.6 2.2 2.3 10 1.9 7 0 0 – 4.4 2.6 1.7 3.9 0.33 2.1 1.6 3.5 0.59 0 5.2 48 89 81 43 30 33 36 87 33 48 49 89 1ps Ps 2ps Ps 3ps Ps 1pp Ps 2pp Ps 3pp Ps 1ps Pt 2ps Pt 3ps Pt 1pp Pt 2pp Pt 3pp Pt 3pp Pt 2pp Pt 1pp Pt 3ps Pt 2ps Pt 1ps Pt 3pp Ps 2pp Ps 1pp Ps 3ps Ps 2ps Ps 1ps Ps GGM 7.3 1.3 7.2 3.4 0 13 1.4 0 37 1.8 0 – 0 0 0 0 0 0 0 0 0 0 – 0 1.7 0 3.3 0 0 2.8 1.7 0 3.4 – 0 2 7.6 0 13 1.3 0 7.2 0.86 0 – 3.5 0 36 0 0 0 0 0 0 0 – 0 0 0 0 0.14 0.5 2 3.2 0 0.33 – 0 1.4 0.83 0 1.7 1.6 0 30 4 0 – 0.24 0 8 4.4 0 14 0 0 0 0 – 0 0 0 0 0 0 0 0.24 0 1.9 – 0 2.7 2.1 0 2.6 0 0 2.2 1.2 9.3 – 1.2 0 29 1.7 3.3 13 2.5 0 8.2 0 – 14 0.5 5 0.091 0.83 0 0.17 0 0 0.28 – 0 0.79 2.1 0 0.33 0.83 0 3.4 1 0 3 8.8 4.5 30 14 10 49 9.4 8.1 41 9.5 0 43 Table 2: The two tables show how the Gaussian graphical model (GGM) compares to various analogies on Czech nouns (left) and Spanish verbs (right).The numbers in each cell represent the accuracy (larger is better). The columns represent the inflection of the word i to be predicted. Our GGM model is the top row. The other rows subdivide the baseline analogy results according to the inflection of source word a. Abbreviations: in the Czech noun table (left), the first word indicates the case and the second the number, e.g., Dat Sg = Dative Singular. In the Spanish verb table (right), the first word is the person and number and the second the tense, e.g., 3pp Pt = 3rd-person plural past. types i ∈C, using the GENSIM implementation ( ˇReh˚uˇrek and Sojka, 2010) of the WORD2VEC hierarchical softmax skip-gram model (Mikolov et al., 2013a), with a context size of 5. We set the dimension n to 100 for all experiments.6 We then apply our GGM to generate smoothed embeddings wi for all word types i ∈C ∩L. (Recall that the noun and verb sense of bats are separate types in L, even if conflated in C, and get separate embeddings.) How do we handle other word types? For an out-of-vocabulary (OOV) test word i ̸∈C, we will extrapolate wi ←P k∈Mi mk on demand, as the GGM predicts, provided i ∈L. If any of these morphemes mk were themselves never seen in C, we back off to the mode of the prior to take mk = 0.7 Our experiments also encounter out-of-lexicon (OOL) test words i ̸∈L, for which we have no morphological analysis; here we take wi = vi (unsmoothed) if i ∈C and wi = 0 otherwise. 8.1 Experiment 1: Extrapolation vs. Analogy Our first set of experiments uses embeddings for word selection. Our prediction task is to identify the unique word i ∈C that expresses the 6An additional important hyperparameter is the number of epochs. The default value in the GENSIM package is 5, which is suitable for larger corpora. We use this value for Experiments 1 and 3. Experiment 2 involves training on smaller corpora and we found it necessary to set the number of epochs to 10. 
7One could in principle learn “backoff morphemes.” For instance, if borogoves is analyzed as [ Lemma=OOV NOUN, Num=PL ], we might want mLemma=OOV NOUN ̸= 0 to represent novel nouns. morphological attributes Mi. To do this, we predict a target embedding x, and choose the most similar unsmoothed word by cosine distance, ˆı = argmaxj∈C vj · x. We are scored correct if ˆı = i. Our experimental design ensures that i ̸∈L, since if it were, we could trivially find i simply by consulting L. The task is to identify missing lexical entries, by exploiting the distributional properties in C.8 Given the input bundle Mi, our method predicts the embedding x = P k∈Mi mk, and so looks for a word j ∈C whose unsmoothed embedding vj ≈x. The GGM’s role here is to predict that the bundle Mi will be realized by something like x. The baseline method is the analogy method of equation (1). This predicts the embedding x via the vector-offset formula va + (vb −vc), where a, b, c ∈C ∩L are three other words sharing i’s coarse part of speech such that Mi can be expressed as Ma + (Mb −Mc).9 Specifically, the baseline chooses a, b, c uniformly at random from all possibilities. (This is not too inefficient: given a, at most one choice of (b, c) is possible.) Note that the baseline extrapolates from the unsmoothed embeddings of 3 other words, whereas the GGM considers all words in C ∩L that share i’s morphemes. 8The argmax selection rule does not exploit the fact that the entry is missing: it is free to incorrectly return some ˆı ∈ L. 9More formally, Mi = Ma + (Mb −Mc), if we define M by (M)k = I(k ∈M) for all morphemes k ∈M. This converts morpheme bundle M to a {0, 1} indicator vector M over M. 1656 analogy GGM analogy GGM analogy GGM analogy GGM analogy GGM InsP .067 .343⋆ AccS .131 .582⋆ VBD .052 .202⋆ MascS Part .178 .340⋆ LocS .002 .036⋆ GenS .059 .632⋆ NomS .136 .571⋆ VBG .06 .102⋆ FemP Part .230 .286 AblS .001 .019⋆ GenP .078 .242⋆ DatP .074 .447⋆ NN .051 .080⋆ FemS Part .242 .235 DatS .001 .037⋆ NomP .076 .764⋆ AccP .075 .682⋆ VBN .052 .218⋆ AdjS .201 .286⋆ AccS .001 .023⋆ NomS .066 .290⋆ GenP .075 .666⋆ NNS .052 .114⋆ GerundS .186 .449⋆ InsS .001 .004 VocP .063 .789⋆ NomP .064 .665⋆ VB .056 .275⋆ GerundP .172 .311⋆ NomS .001 .023⋆ Czech German English Spanish Turkish nouns nouns nouns & verbs nouns, verbs & adj’s nouns Table 3: Test results for Experiment 1. The rows indicate the inflection of the test word i to be predicted (superscript P indicates plural, superscript S singular). The columns indicate the prediction method. Each number is an average over 10 training-test splits. Improvements marked with a ⋆are statistically significant (p < 0.05) under a paired permutation test over these 10 runs. Experimental Setup: A lexical resource consists of pairs (word form i, analysis Mi). For each language, we take a random 80% of these pairs to serve as the training lexicon L that is seen by the GGM. The remaining pairs are used to construct our prediction problems (given Mi, predict i), with a random 10% each as dev and test examples. We compare our method against the baseline method on ten such random training-test splits. We are releasing all splits for future research. For some dev and test examples, the baseline method has no choice of the triple a, b, c. Rather than score these examples as incorrect, our baseline results do not consider them at all (which inflates performance). 
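For concreteness, the two competing selection rules (the GGM's compositional prediction and the vector-offset analogy baseline) can be sketched as follows. This is an illustrative reconstruction rather than the authors' evaluation code; the arg-max uses cosine similarity over the unsmoothed embeddings, as in the selection rule above.

```python
import numpy as np

def nearest_word(x, V, vocab):
    """Return the corpus word whose unsmoothed embedding v_j is most similar to the
    predicted target embedding x (cosine similarity, as in the arg-max rule above).
    V is a |C| x n matrix of unsmoothed embeddings, vocab the matching word list."""
    sims = (V @ x) / (np.linalg.norm(V, axis=1) * np.linalg.norm(x) + 1e-12)
    return vocab[int(np.argmax(sims))]

def ggm_predict(Mi, m):
    """GGM prediction: x = sum of the latent morpheme embeddings of the bundle M_i."""
    return sum(m[k] for k in Mi)

def analogy_predict(a, b, c, v):
    """Baseline prediction via the vector-offset formula x = v_a + (v_b - v_c),
    for words a, b, c whose analyses satisfy M_i = M_a + (M_b - M_c)."""
    return v[a] + (v[b] - v[c])
```

How the triples (a, b, c) are sampled, and how their predictions are averaged, is described next and is independent of these two prediction rules.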
For each remaining example, to reduce variance, the baseline method reports the average performance on up to 100 a, b, c triples sampled uniformly without replacement. The automatically created analogy problems (a, b, c →i) solved by the baseline are similar to those of Mikolov et al. (2013c). However, most previous analogy evaluation sets evaluate only on 4-tuples of frequent words (Nicolai et al., 2015), to escape the need for smoothing, while ours also include infrequent words. Previous evaluation sets also tend to be translations of the original English datasets—leaving them impoverished as they therefore only test morpho-syntactic properties found in English. E.g., the German analogy problems of K¨oper et al. (2015) do not explore the four cases and two numbers in the German adjectival system. Thus our baseline analogy results are useful as a more comprehensive study of the vector offset method for randomly sampled words. Results: Overall results for 5 languages are shown in Table 3. Additional rows break down performance by the inflection of the target word i. (The inflections shown are the ones for which the baseline method is most accurate.) For almost all target inflections, GGM is significantly better than the analogy baseline. An extreme case is the vocative plural in Czech, for which GGM predicts vectors better by more than 70%. In other cases, the margin is slimmer; but GGM loses only on predicting the Spanish feminine singular participle. For Czech, German, English and Spanish the results are clear—GGM yields better predictions. This is not surprising as our method incorporates information from multiple morphologically related forms. More detailed results for two languages are given in Table 2. Here, each row constrains the source word a to have a certain inflectional tag; again we average over up to 100 analogies, now chosen under this constraint, and again we discard a test example i from the test set if no such analogy exists. The GGM row considers all test examples. Past work on morphosyntactic analogies has generally constrained a to be the unmarked (lemma) form (Nicolai et al., 2015). However, we observe that it is easier to predict one word form from another starting from a form that is “closer” in morphological space. For instance, it is easier to predict Czech forms inflected in the genitive plural from forms in nominative plural, rather than the nominative singular. Likewise, it is easier to predict a singular form from another singular form rather than from a plural form. It also is easier to predict partially syncretic forms, i.e., two inflected forms that share the same orthographic string; e.g., in Czech the nominative plural and the accusative plural are identical for inanimate nouns. 1657 105 106 107 108 2000 4000 6000 8000 10000 Perplexity Per Word English 105 106 107 0 5000 10000 15000 20000 25000 30000 35000 Czech 105 106 107 108 1000 2000 3000 4000 5000 6000 7000 8000 Spanish [0, ∞)-U [0, 1)-U [1, 10)-U [10, 20)-U [0, ∞)-S [0, 1)-S [1, 10)-S [10, 20)-S Figure 3: Results for the WORD2VEC skip-gram objective score (perplexity per predicted context word) on a held-out test corpus. The x-axis measures the size in tokens of the training corpus used to generate the model. We plot the held-out perplexity for the skip-gram model with Unsmoothed observed vectors v (solid e) and Smoothed vectors w (barred c). The thickest, darkest curves show aggregate performance. 
The thinner, lighter versions show a breakdown according to whether the predicting word’s frequency in the smallest training corpus falls in the range [0, 1), [1, 10), or [10, 20) (from lightest to darkest and roughly from top to bottom). (These are the words whose representations we smooth; footnote 10 explains why we do not smooth the predicted context word.) We do not show [20, ∞) since WORD2VEC randomly removes some tokens of high-frequency words (“subsampling”), similar in spirit to removing stop words. See Appendix B for more graphs. 8.2 Experiment 2: Held-Out Evaluation We now evaluate the smoothed and extrapolated representations wi. Fundamentally, we want to know if our approach improves the embeddings of the entire vocabulary, as if we had seen more evidence. But we cannot simply compare our smoothed vectors to “gold” vectors trained on much more data, since two different runs of WORD2VEC will produce incomparable embedding schemes. We must ask whether our embeddings improve results on a downstream task. To avoid choosing a downstream task with a narrow application, we evaluate our embedding using the WORD2VEC skip-gram objective on held-out data—as one would evaluate a language model. If we believe that a better score on the WORD2VEC objective indicates generally more useful embeddings—which indeed we do as we optimize for it—then improving this score indicates that our smoothed vectors are superior. Concretely, the objective is X s X t X j∈[t−5,t+5],j̸=t log2 pword2vec(Tsj |Tst), (8) where Ts is the sth sentence in the test corpus, t indexes its tokens, and j indexes tokens near t. The probability model pword2vec is defined in Eq. (3) of (Mikolov et al., 2013b). It relies on an embedding of the word form Tst.10 Our baseline approach 10In the hierarchical softmax version, it also relies on a separate embedding for a variable-length bit-string encoding of the context word Tsj. Unfortunately, we do not currently know of a way to smooth these bit-string encodings (also found by WORD2VEC). However, it might be possible to directly incorporate morphology into the construction of the vocabulary tree that defines the bit-strings. simply uses WORD2VEC’s embeddings (or 0 for OOV words Tst ̸∈C). Our GGM approach substitutes “better” embeddings when Tst appears in the lexicon L (if Tst is ambiguous, we use the mean wi vector from all i ∈L with spelling Tst). Note that (8) is itself a kind of task of predicting words in context, resembling language modeling or a “cloze” task. Also, Taddy (2015) showed how to use this objective for document classification. Experimental Setup: We evaluate GGM on the same 5 languages, but now hold out part of the corpus instead of part of the lexicon. We take the training corpus C to be the initial portion of Wikipedia of size 105, 106, 107 or 108. (We skip the 108 case for the smaller datasets: Czech and Turkish). The 107 tokens after that are the dev corpus; the next 107 tokens are the test corpus. Results: We report results on three languages in Figure 3 and all languages in Appendix B. Smoothing from vi to wi helps a lot, reducing perplexity by up to 48% (Czech) with 105 training tokens and up to 10% (Spanish) even with 108 training tokens. This roughly halves the perplexity, which in the case of 105 training tokens, is equivalent to 8× more training data. This is a clear win for lower-resource languages. We get larger gains from smoothing the rarer predicting words, but even words with frequency ≥10−4 benefit. 
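As a rough sketch of how the held-out score in (8) is turned into the perplexity per predicted context word plotted in Figure 3, the fragment below does only the bookkeeping; the probability model itself is passed in as a function, since the paper uses the hierarchical-softmax skip-gram model of Mikolov et al. (2013b), which is not re-implemented here.

```python
def heldout_perplexity(test_sentences, log2_prob, window=5):
    """Perplexity per predicted context word, per equation (8).
    `log2_prob(context_word, center_word)` should return log2 p(context | center)
    under the trained model; supplying it (e.g. from a WORD2VEC implementation)
    is left to the caller, so this sketch is only an illustration of the metric."""
    total_log2p, n_predictions = 0.0, 0
    for sentence in test_sentences:            # each sentence is a list of tokens
        for t, center in enumerate(sentence):
            lo, hi = max(0, t - window), min(len(sentence), t + window + 1)
            for j in range(lo, hi):
                if j == t:
                    continue
                total_log2p += log2_prob(sentence[j], center)
                n_predictions += 1
    return 2.0 ** (-total_log2p / max(n_predictions, 1))
```

Lower is better; roughly halving this perplexity on the smallest training corpora is the gain reported above for the smoothed vectors.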
(The exception is Turkish, where the large gains are confined to rare predicting words.) See Appendix B for more analysis. 1658 English German Spanish Forms / Lemma 1.8 6.3 8.1 Skip-Gram 58.9 36.2 37.8 GGM 58.9 37.6 40.3 Table 4: Word similarity results (correlations) using the WS353 dataset in the three languages, in which it is available. Since all the words in WS-353 are lemmata, we report the average inflected form to lemma ratio for forms appearing in the datasets. 8.3 Experiment 3: Word Similarity As a third and final experiment, we consider word similarity using the WS-353 data set (Finkelstein et al., 2001), translated into Spanish (Hassan and Mihalcea, 2009) and German (Leviant, 2016).11 The datasets are composed of 353 pairs of words. Multiple native speakers were then asked to give an integral value between 1 and 10 indicating the similarity of that pair, and those values were then averaged. In each case, we train the GGM on the whole Wikipedia corpus for the language. Since in each language every word in the WS-353 set is in fact a lemma, we use the latent embedding our GGM learns in the experiment. In Spanish, for example, we use the learned latent morpheme embedding for the lemma BEBER (recall this takes information from every element in the paradigm, e.g., bebemos and beben), rather than the embedding for the infinitival form beber. In highly inflected languages we expect this to improve performance, because to get the embedding of a lemma, it leverages the distributional signal from all inflected forms of that lemma, not just a single one. Note that unlike previous retrofitting approaches, we do not introduce new semantic information into the model, but rather simply allow the model to better exploit the distributional properties already in the text, by considering words with related lemmata together. In essence, our approach embeds a lemma as the average of all words containing that lemma, after “correcting” those forms by subtracting off their other morphemes (e.g., inflectional affixes). Results: As is standard in the literature, we report Spearman’s correlation cofficient ρ between the averaged human scores and the cosine distance between the embeddings. We report results in Table 4. We additionally report the average num11This dataset has yet to be translated into Czech or Turkish, nor are there any comparable resources in these languages. ber of forms per lemma. We find that we improve performance on the Spanish and German datasets over the original skip-gram vectors, but only equal the performance on English. This is not surprising as German and Spanish have roughly 3 and 4 times more forms per lemma than English. We speculate that cross-linguistically the GGM will improve word similarity scores more for languages with richer morphology. 9 Conclusion and Future Work For morphologically rich languages, we generally will not observe, even in a large corpus, a high proportion of the word forms that exist in lexical resources. We have presented a Gaussian graphical model that exploits lexical relations documented in existing morphological resources to smooth vectors for observed words and extrapolate vectors for new words. We show that our method achieves large improvements over strong baselines for the tasks of morpho-syntactic analogies and predicting words in context. Future work will consider the role of derivational morphology in embeddings as well as noncompositional cases of inflectional morphology. 
Acknowledgments The first author was funded by a DAAD LongTerm Research Grant. This work was also partially supported by Deutsche Forschungsgemeinschaft (grat SCHU-2246/2-2 WordGraph) and by the U.S. National Science Foundation under Grant No. 1423276. References R Harald Baayen, Richard Piepenbrock, and Rijn van H. 1993. The CELEX lexical data base on CDROM. Francis R Bach and Michael I Jordan. 2005. A probabilistic interpretation of canonical correlation analysis. Technical report, UC Berkeley. Jan A. Botha and Phil Blunsom. 2014. Compositional Morphology for Word Representations and Language Modelling. In ICML. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12. Ryan Cotterell and Hinrich Sch¨utze. 2015. Morphological word embeddings. In NAACL. 1659 Ryan Cotterell, Thomas M¨uller, Alexander Fraser, and Hinrich Sch¨utze. 2015a. Labeled morphological segmentation with semi-Markov models. In CoNLL. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015b. Modeling word forms using latent underlying morphs and phonology. TACL, 3. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1):3. Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, pages 1–38. Cıcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In ICML. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In WWW. ACM. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In NAACL. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. Neural Networks. Jan Hajiˇc and Jaroslava Hlav´aˇcov´a. 2013. MorfFlex CZ. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague. Jan Hajiˇc. 2000. Morphological tagging: Data vs. dictionaries. In NAACL. Samer Hassan and Rada Mihalcea. 2009. Crosslingual semantic relatedness using encyclopedic knowledge. In EMNLP. Daphne Koller and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT press. Maximilian K¨oper, Christian Scheible, and Sabine Schulte im Walde. 2015. Multilingual reliability and semantic structure of continuous word spaces. In IWCS. Ira Leviant. 2016. Separated by an Un-common Language: Towards Judgment Language Informed Vector Space Modeling. Ph.D. thesis, Technion. Minh-Thang Luong, Richard Socher, and C Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In ICLR Workshop. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. 
In NAACL. George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–244. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In ICML. Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Morpho-syntactic regularities in continuous word representations: A multilingual study. In Proceedings of Workshop on Vector Space Modeling for NLP, pages 129–134. Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Liu TieYan. 2014. Co-learning of word representations and morpheme representations. In COLING. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50. Sam Roweis. 1998. EM algorithms for PCA and SPCA. In NIPS. Richard Socher. 2014. Recursive Deep Learning for Natural Language Processing and Computer Vision. Ph.D. thesis, Stanford University. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent feature schema for inflectional morphology. In ACL. Matt Taddy. 2015. Document classification by inversion of distributed language representations. In ACL. Michael E Tipping and Christopher M Bishop. 1999. Probabilistic principal component analysis. Journal of the Royal Statistical Society B, 61(3):611–622. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. 1660
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1661–1670, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-lingual Models of Word Embeddings: An Empirical Comparison Shyam Upadhyay1 Manaal Faruqui2 Chris Dyer2 Dan Roth1 1 Department of Computer Science, University of Illinois, Urbana-Champaign, IL, USA 2 School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA [email protected], [email protected] [email protected], [email protected] Abstract Despite interest in using cross-lingual knowledge to learn word embeddings for various tasks, a systematic comparison of the possible approaches is lacking in the literature. We perform an extensive evaluation of four popular approaches of inducing cross-lingual embeddings, each requiring a different form of supervision, on four typologically different language pairs. Our evaluation setup spans four different tasks, including intrinsic evaluation on mono-lingual and cross-lingual similarity, and extrinsic evaluation on downstream semantic and syntactic applications. We show that models which require expensive cross-lingual knowledge almost always perform better, but cheaply supervised models often prove competitive on certain tasks. 1 Introduction Learning word vector representations using monolingual distributional information is now a ubiquitous technique in NLP. The quality of these word vectors can be significantly improved by incorporating cross-lingual distributional information (Klementiev et al., 2012; Zou et al., 2013; Vuli´c and Moens, 2013b; Mikolov et al., 2013b; Faruqui and Dyer, 2014; Hermann and Blunsom, 2014; Chandar et al., 2014, inter alia), with improvements observed both on monolingual (Faruqui and Dyer, 2014; Rastogi et al., 2015) and cross-lingual tasks (Guo et al., 2015; Søgaard et al., 2015; Guo et al., 2016). Several models for inducing cross-lingual embeddings have been proposed, each requiring a different form of cross-lingual supervision – some can use document-level alignments (Vuli´c and Moens, 2015), others need alignments at the sentence (Hermann and Blunsom, 2014; Gouws et al., 2015) or word level (Faruqui and Dyer, 2014; Gouws and Søgaard, 2015), while some require both sentence and word alignments (Luong et al., 2015). However, a systematic comparison of these models is missing from the literature, making it difficult to analyze which approach is suitable for a particular NLP task. In this paper, we fill this void by empirically comparing four cross-lingual word embedding models each of which require different form of alignment(s) as supervision, across several dimensions. To this end, we train these models on four different language pairs, and evaluate them on both monolingual and cross-lingual tasks.1 First, we show that different models can be viewed as instances of a more general framework for inducing cross-lingual word embeddings. Then, we evaluate these models on both extrinsic and intrinsic tasks. Our intrinsic evaluation assesses the quality of the vectors on monolingual (§4.2) and cross-lingual (§4.3) word similarity tasks, while our extrinsic evaluation spans semantic (cross-lingual document classification §4.4) and syntactic tasks (cross-lingual dependency parsing §4.5). Our experiments show that word vectors trained using expensive cross-lingual supervision (word alignments or sentence alignments) perform the best on semantic tasks. 
On the other hand, for syntactic tasks like cross-lingual dependency parsing, models requiring weaker form of cross-lingual supervision (such as context agnostic translation dictionary) are competitive to models requiring expensive supervision. We also show qualitatively how the nature of cross-lingual supervision used to train word vectors affects the proximity of translation pairs across languages, and of words with similar meaning in the same language in the vector-space. 1Instructions and code to reproduce the experiments available at http://cogcomp.cs.illinois.edu/ page/publication_view/794 1661 Algorithm 1 General Algorithm 1: Initialize W ←W0,V ←V0 2: (W∗, V∗) ←arg min αA(W) + βB(V) + C(W, V) Figure 1: (Above) A general schema for induction of crosslingual word vector representations. The word vector model generates embeddings which incorporates distributional information cross-lingually. (Below) A general algorithm for inducing bilingual word embeddings, where α, β, W0, V0 are parameters and A, B, C are suitably defined losses. 2 Bilingual Embeddings A general schema for inducing bilingual embeddings is shown in Figure 1. Our comparison focuses on dense, fixed-length distributed embeddings which are obtained using some form of cross-lingual supervision. We briefly describe the embedding induction procedure for each of the selected bilingual word vector models, with the aim to provide a unified algorithmic perspective for all methods, and to facilitate better understanding and comparison. Our choice of models spans across different forms of supervision required for inducing the embeddings, illustrated in Figure 2. Notation. Let W = {w1, w2, . . . , w|W|} be the vocabulary of a language l1 with |W| words, and W ∈R|W|×l be the corresponding word embeddings of length l. Let V = {v1, v2, . . . , v|V |} be the vocabulary of another language l2 with |V | words, and V ∈R|V |×m the corresponding word embeddings of length m. We denote the word vector for a word w by w. 2.1 Bilingual Skip-Gram Model (BiSkip) Luong et al. (2015) proposed Bilingual SkipGram, a simple extension of the monolingual skipgram model, which learns bilingual embeddings by using a parallel corpus along with word alignments (both sentence and word level alignments). The learning objective is a simple extension of the skip-gram model, where the context of a word is expanded to include bilingual links obtained from word alignments, so that the model is trained to predict words cross-lingually. In particular, given a word alignment link from word v ∈V in language l2 to w ∈W in language l1, the model predicts the context words of w using v and vice-versa. Formally, the cross lingual part of the objective is, D12(W, V) = − X (v,w)∈Q X wc∈NBR1(w) log P(wc | v) (1) where NBR1(w) is the context of w in language l1, Q is the set of word alignments, and P(wc | v) ∝ exp(wT c v). Another similar term D21 models the objective for v and NBR2(v). The objective can be cast into Algorithm 1 as, C(W, V) = D12(W, V) + D21(W, V) (2) A(W) = − X w∈W X wc∈NBR1(w) log P(wc | w) (3) B(V) = − X v∈V X vc∈NBR2(v) log P(vc | v) (4) where A(W) and B(V) are the familiar skipgram formulation of the monolingual part of the objective. α and β are chosen hyper-parameters which set the relative importance of the monolingual terms. 2.2 Bilingual Compositional Model (BiCVM) Hermann and Blunsom (2014) present a method that learns bilingual word vectors from a sentence aligned corpus. 
Their model leverages the fact that aligned sentences have equivalent meaning, thus their sentence representations should be similar. We denote two aligned sentences, ⃗v = ⟨x1, . . . , ⟩and ⃗w = ⟨y1, . . .⟩, where xi ∈ V, yi ∈W, are vectors corresponding to the words in the sentences. Let functions f : ⃗v →Rn and g : ⃗w →Rn, map sentences to their semantic representations in Rn. BiCVM generates word vectors by minimizing the squared ℓ2 norm between the sentence representations of aligned sentences. In order to prevent the degeneracy arising from directly minimizing the ℓ2 norm, they use a noise-contrastive large-margin update, with randomly drawn sentence pairs (⃗v, ⃗wn) as negative samples. The loss for the sentence pairs (⃗v, ⃗w) and (⃗v, ⃗wn) can be written as, E(⃗v, ⃗w, ⃗wn) = max (δ + ∆E(⃗v, ⃗w, ⃗wn), 0) (5) where, E(⃗v, ⃗w) = ∥f(⃗v) −g(⃗w)∥2 (6) 1662 I Love You Je t’ aime (a) BiSkip I Love You Je t’ aime (b) BiCVM (I, Je) (Love, aime) (You, t’) (c) BiCCA Hello! how are you? I Love You. Bonjour! Je t’ aime. (d) BiVCD Figure 2: Forms of supervision required by the four models compared in this paper. From left to right, the cost of the supervision required varies from expensive (BiSkip) to cheap (BiVCD). BiSkip requires a parallel corpus annotated with word alignments (Fig. 2a), BiCVM requires a sentence-aligned corpus (Fig. 2b), BiCCA only requires a bilingual lexicon (Fig. 2c) and BiVCD requires comparable documents (Fig. 2d). and, ∆E(⃗v, ⃗w, ⃗wn) = E(⃗v, ⃗w) −E(⃗v, ⃗wn) (7) This can be cast into Algorithm 1 by, C(W, V) = X aligned (⃗v,⃗w) random ⃗wn E(⃗v, ⃗w, ⃗wn) (8) A(W) = ∥W∥2 B(V) = ∥V∥2 (9) with A(W) and B(V) being regularizers, with α = β. 2.3 Bilingual Correlation Based Embeddings (BiCCA) The BiCCA model, proposed by Faruqui and Dyer (2014), showed that when (independently trained) monolingual vector matrices W, V are projected using CCA (Hotelling, 1936) to respect a translation lexicon, their performance improves on word similarity and word analogy tasks. They first construct W′ ⊆W, V′ ⊆V such that |W′|= |V′| and the corresponding words (wi, vi) in the matrices are translations of each other. The projection is then computed as: PW , PV = CCA(W′, V′) (10) W∗= WPW V∗= VPV (11) where, PV ∈Rl×d, PW ∈Rm×d are the projection matrices with d ≤min(l, m) and the V∗∈ R|V |×d, W∗∈R|W|×d are the word vectors that have been “enriched” using bilingual knowledge. The BiCCA objective can be viewed2 as the following instantiation of Algorithm 1: W0 = W′, V0 = V′ (12) C(W, V) = ∥W −V∥2+γ VT W  (13) A(W) = ∥W∥2−1 B(V) = ∥V∥2−1 (14) where W = W0PW and V = V0PV , where we set α = β = γ = ∞to set hard constraints. 2described in Section 6.5 of (Hardoon et al., 2004) 2.4 Bilingual Vectors from Comparable Data (BiVCD) Another approach of inducing bilingual word vectors, which we refer to as BiVCD, was proposed by Vuli´c and Moens (2015). Their approach is designed to use comparable corpus between the source and target language pair to induce crosslingual vectors. Let de and df denote a pair of comparable documents with length in words p and q respectively (assume p > q). BiVCD first merges these two comparable documents into a single pseudobilingual document using a deterministic strategy based on length ratio of two documents R = ⌊p q⌋. Every Rth word of the merged pseudo-bilingual document is picked sequentially from df. Finally, a skip-gram model is trained on the corpus of pseudo-bilingual documents, to generate vectors for all words in W∗∪V∗. 
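Because the length-ratio merging step is easy to misread, here is one literal reading of it in code. It is a sketch under stated assumptions (in particular, how leftover words and the equal-length case are handled is not specified in the description above) and is not the released BiVCD implementation.

```python
def merge_pseudo_bilingual(doc_e, doc_f):
    """Merge comparable documents doc_e (length p) and doc_f (length q <= p) into one
    pseudo-bilingual word sequence: every R-th position, with R = floor(p / q), takes
    the next word of the shorter document; the remaining positions come from the longer
    one. Leftover words are appended at the end, which is an assumption of this sketch."""
    if len(doc_e) < len(doc_f):                 # ensure doc_e is the longer document
        doc_e, doc_f = doc_f, doc_e
    R = max(len(doc_e) // len(doc_f), 1)
    merged, it_e, it_f = [], iter(doc_e), iter(doc_f)
    pos = 1
    while True:
        source = it_f if pos % R == 0 else it_e
        word = next(source, None)
        if word is None:
            break
        merged.append(word)
        pos += 1
    merged.extend(it_e)                         # append whatever remains of either document
    merged.extend(it_f)
    return merged
```

Training an ordinary monolingual skip-gram model over such merged sequences is what places words of both languages in a shared vector space.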
The vectors constituting W∗and V∗can then be easily identified. Instantiating BiVCD in the general algorithm is obvious: C(W, V) assumes the familiar word2vec skip-gram objective over the pseudobilingual document, C(W, V) = − X s∈W∪V X t∈NBR(s) log P(t | s) (15) where NBR(s) is defined by the pseudo-bilingual document and P(t | s) ∝exp(tT s). Note that t, s ∈W ∪V . Although BiVCD is designed to use comparable corpus, we provide it with parallel data in our experiments (to ensure comparability), and treat two aligned sentences as comparable. 3 Data We train cross-lingual embeddings for 4 language pairs: English-German (en-de), English-French (en-fr), English-Swedish (en-sv) and EnglishChinese (en-zh). For en-de and en-sv we use the 1663 l1 l2 #sent #l1-words #l2-words en de 1.9 53 51 fr 2.0 55 61 sv 1.7 46 42 zh 2.0 58 50 Table 1: The size of parallel corpora (in millions) of different language pairs used for training cross-lingual word vectors. Europarl v7 parallel corpus3 (Koehn, 2005). For en-fr, we use Europarl combined with the newscommentary and UN-corpus dataset from WMT 2015.4 For en-zh, we use the FBIS parallel corpus from the news domain (LDC2003E14). We use the Stanford Chinese Segmenter (Tseng et al., 2005) to preprocess the en-zh parallel corpus. Corpus statistics for all languages is shown in Table 1. 4 Evaluation We measure the quality of the induced crosslingual word embeddings in terms of their performance, when used as features in the following tasks: • monolingual word similarity for English • Cross-lingual dictionary induction • Cross-lingual document classification • Cross-lingual syntactic dependency parsing The first two tasks intrinsically measure how much can monolingual and cross-lingual similarity benefit from cross-lingual training. The last two tasks measure the ability of cross-lingually trained vectors to extrinsically facilitate model transfer across languages, for semantic and syntactic applications respectively. These tasks have been used in previous works (Klementiev et al., 2012; Luong et al., 2015; Vuli´c and Moens, 2013a; Guo et al., 2015) for evaluating cross-lingual embeddings, but no comparison exists which uses them in conjunction. To ensure fair comparison, all models are trained with embeddings of size 200. We provide all models with parallel corpora, irrespective of their requirements. Whenever possible, we also report statistical significance of our results. 3www.statmt.org/europarl/v7/{de, sv}-en.tgz 4www.statmt.org/wmt15/ translation-task.html 4.1 Parameter Selection We follow the BestAvg parameter selection strategy from Lu et al. (2015): we selected the parameters for all models by tuning on a set of values (described below) and picking the parameter setting which did best on an average across all tasks. BiSkip. All models were trained using a window size of 10 (tuned over {5, 10, 20}), and 30 negative samples (tuned over {10, 20, 30}). The cross-lingual weight was set to 4 (tuned over {1, 2, 4, 8}). The word alignments for training the model (available at github. com/lmthang/bivec) were generated using fast_align (Dyer et al., 2013). The number of training iterations was set to 5 (no tuning) and we set α = 1 and β = 1 (no tuning). BiCVM. We use the tool (available at github. com/karlmoritz/bicvm) released by Hermann and Blunsom (2014) to train all embeddings. 
We train an additive model (that is, f(⃗x) = g(⃗x) = P i xi) with hinge loss margin set to 200 (no tuning), batch size of 50 (tuned over 50, 100, 1000) and noise parameter of 10 (tuned over {10, 20, 30}). All models are trained for 100 iterations (no tuning). BiCCA. First, monolingual word vectors are trained using the skip-gram model5 with negative sampling (Mikolov et al., 2013a) with window of size 5 (tuned over {5, 10, 20}). To generate a cross-lingual dictionary, word alignments are generated using cdec from the parallel corpus. Then, word pairs (a, b), a ∈l1, b ∈l2 are selected such that a is aligned to b the most number of times and vice versa. This way, we obtained dictionaries of approximately 36k, 35k, 30k and 28k word pairs for en-de, en-fr, en-sv and en-zh respectively. The monolingual vectors are aligned using the above dictionaries with the tool (available at github.com/mfaruqui/eacl14-cca) released by Faruqui and Dyer (2014) to generate the cross-lingual word embeddings. We use k = 0.5 as the number of canonical components (tuned over {0.2, 0.3, 0.5, 1.0}). Note that this results in a embedding of size 100 after performing CCA. BiVCD. We use word2vec’s skip gram model for training our embeddings, with a window size of 5 (tuned on {5, 10, 20, 30}) and negative sampling parameter set to 5 (tuned on {5, 10, 25}). Every pair of parallel sentences is treated as a 5code.google.com/p/word2vec 1664 pair of comparable documents, and merging is performed using the sentence length ratio strategy described earlier.6 4.2 Monolingual Evaluation We first evaluate if the inclusion of cross-lingual knowledge improves the quality of English embeddings. Word Similarity. Word similarity datasets contain word pairs which are assigned similarity ratings by humans. The task evaluates how well the notion of word similarity according to humans is emulated in the vector space. Evaluation is based on the Spearman’s rank correlation coefficient (Myers and Well, 1995) between human rankings and rankings produced by computing cosine similarity between the vectors of two words. We use the SimLex dataset for English (Hill et al., 2014) which contains 999 pairs of English words, with a balanced set of noun, adjective and verb pairs. SimLex is claimed to capture word similarity exclusively instead of WordSim353 (Finkelstein et al., 2001) which captures both word similarity and relatedness. We declare significant improvement if p < 0.1 according to Steiger’s method (Steiger, 1980) for calculating the statistical significant differences between two dependent correlation coefficients. Table 2 shows the performance of English embeddings induced by all the models by training on different language pairs on the SimLex word similarity task. The score obtained by monolingual English embeddings trained on the respective English side of each language is shown in column marked Mono. In all cases (except BiCCA on ensv), the bilingually trained vectors achieve better scores than the mono-lingually trained vectors. Overall, across all language pairs, BiCVM is the best performing model in terms of Spearman’s correlation, but its improvement over BiSkip and BiVCD is often insignificant. It is notable that 2 of the 3 top performing models, BiCVM and BiVCD, need sentence aligned and document-aligned corpus only, which are easier to obtain than parallel data with word alignments required by BiSkip. QVEC. Tsvetkov et al. (2015) proposed an intrinsic evaluation metric for estimating the quality of English word vectors. 
The score produced by QVEC measures how well a given set of word vectors is able to quantify linguistic properties 6We implemented the code for performing the merging as we could not find a tool provided by the authors. pair Mono BiSkip BiCVM BiCCA BiVCD en-de 0.29 0.34 0.37 0.30 0.32 en-fr 0.30 0.35 0.39 0.31 0.36 en-sv 0.28 0.32 0.34 0.27 0.32 en-zh 0.28 0.34 0.39 0.30 0.31 avg. 0.29 0.34 0.37 0.30 0.33 Table 2: Word similarity score measured in Spearman’s correlation ratio for English on SimLex-999. The best score for each language pair is shown in bold. Scores which are significantly better (per Steiger’s Method with p < 0.1) than the next lower score are underlined. For example, for en-zh, BiCVM is significantly better than BiSkip, which in turn is significantly better than BiVCD. pair Mono BiSkip BiCVM BiCCA BiVCD en-de 0.39 0.40 0.31 0.33 0.37 en-fr 0.39 0.40 0.31 0.33 0.38 en-sv 0.39 0.39 0.31 0.32 0.37 en-zh 0.40 0.40 0.32 0.33 0.38 avg. 0.39 0.40 0.31 0.33 0.38 Table 3: Intrinsic evaluation of English word vectors measured in terms of QVEC score across models. Best scores for each language pair is shown in bold. of words, with higher being better. The metric is shown to have strong correlation with performance on downstream semantic applications. As it can be currently only used for English, we use it to evaluate the English vectors obtained using cross-lingual training of different models. Table 3 shows that on average across language pairs, BiSkip achieves the best score, followed by Mono (mono-lingually trained English vectors), BiVCD and BiCCA. A possible explanation for why Mono scores are better than those obtained by some of the cross-lingual models is that QVEC measures monolingual semantic content based on a linguistic oracle made for English. Cross-lingual training might affect these semantic properties arbitrarily. Interestingly, BiCVM which was the best model according to SimLex, ranks last according to QVEC. The fact that the best models according to QVEC and word similarities are different reinforces observations made in previous work that performance on word similarity tasks alone does not reflect quantification of linguistic properties of words (Tsvetkov et al., 2015; Schnabel et al., 2015). 1665 l1 l2 BiSkip BiCVM BiCCA BiVCD en de 79.7 74.5 72.4 62.5 fr 78.9 72.9 70.1 68.8 sv 77.1 76.7 74.2 56.9 zh 69.4 66.0 59.6 53.2 avg. 76.3 72.5 69.1 60.4 Table 4: Cross-lingual dictionary induction results (top-10 accuracy). The same trend was also observed across models when computing MRR (mean reciprocal rank). 4.3 Cross-lingual Dictionary Induction The task of cross-lingual dictionary induction (Vuli´c and Moens, 2013a; Gouws et al., 2015; Mikolov et al., 2013b) judges how good crosslingual embeddings are at detecting word pairs that are semantically similar across languages. We follow the setup of Vuli´c and Moens (2013a), but instead of manually creating a gold cross-lingual dictionary, we derived our gold dictionaries using the Open Multilingual WordNet data released by Bond and Foster (2013). The data includes synset alignments across 26 languages with over 90% accuracy. First, we prune out words from each synset whose frequency count is less than 1000 in the vocabulary of the training data from §3. Then, for each pair of aligned synsets s1 = {k1, k2, · · ·} s2 = {g1, g2, · · ·}, we include all elements from the set {(k, g) | k ∈s1, g ∈s2} into the gold dictionary, where k and g are the lemmas. 
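A sketch of this gold-dictionary construction, together with the top-10 neighbor check used to score against it in the next paragraph, is given below. The data structures, the frequency table, and the handling of out-of-vocabulary entries are assumptions made for illustration; this is not the evaluation code used in the paper.

```python
import numpy as np

def build_gold_dictionary(aligned_synsets, freq, min_count=1000):
    """Cross product of lemmas from each pair of aligned synsets (s1 in l1, s2 in l2),
    keeping only lemmas seen at least `min_count` times in the training data."""
    gold = set()
    for s1, s2 in aligned_synsets:
        s1 = {k for k in s1 if freq.get(k, 0) >= min_count}
        s2 = {g for g in s2 if freq.get(g, 0) >= min_count}
        gold.update((k, g) for k in s1 for g in s2)
    return gold

def top10_accuracy(gold, emb_l1, emb_l2, k=10):
    """Fraction of gold entries (e, f) for which f is among the top-k cosine neighbors
    of e's vector in the other language's vocabulary. Entries whose words lack an
    embedding are skipped here, which is an assumption of this sketch."""
    words2 = list(emb_l2)
    M2 = np.stack([emb_l2[w] for w in words2])
    M2 = M2 / (np.linalg.norm(M2, axis=1, keepdims=True) + 1e-12)
    hits, total = 0, 0
    for e, f in gold:
        if e not in emb_l1 or f not in emb_l2:
            continue
        q = emb_l1[e] / (np.linalg.norm(emb_l1[e]) + 1e-12)
        top = np.argsort(-(M2 @ q))[:k]
        hits += f in {words2[j] for j in top}
        total += 1
    return hits / max(total, 1)
```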
Using this approach we generated dictionaries of sizes 1.5k, 1.4k, 1.0k and 1.6k pairs for en-fr, en-de, en-sv and en-zh respectively. We report top-10 accuracy, which is the fraction of the entries (e, f) in the gold dictionary, for which f belongs to the list of top-10 neighbors of the word vector of e, according to the induced cross-lingual embeddings. From the results (Table 4), it can be seen that for dictionary induction, the performance improves with the quality of supervision. As we move from cheaply supervised methods (eg. BiVCD) to more expensive supervision (eg. BiSkip), the accuracy improves. This suggests that for cross lingual similarity tasks, the more expensive the cross-lingual knowledge available, the better. Models using weak supervision like BiVCD perform poorly in comparison to models like BiSkip and BiCVM, with performance gaps upwards of 10 pts on an average. l1 l2 BiSkip BiCVM BiCCA BiVCD en de 85.2 85.0 79.1 79.9 fr 77.7 71.7 70.7 72.0 sv 72.3 69.1 65.3 59.9 zh 75.5 73.6 69.4 73.0 de en 74.9 71.1 64.9 74.1 fr 80.4 73.7 75.5 77.6 sv 73.4 67.7 67.0 78.2 zh 81.1 76.4 77.3 80.9 avg. 77.6 73.5 71.2 74.5 Table 5: Cross-lingual document classification accuracy when trained on language l1, and evaluated on language l2. The best score for each language is shown in bold. Scores which are significantly better (per McNemar’s Test with p < 0.05) than the next lower score are underlined. For example, for sv→en, BiVCD is significantly better than BiSkip, which in turn is significantly better than BiCVM. 4.4 Cross-lingual Document Classification We follow the cross-lingual document classification (CLDC) setup of Klementiev et al. (2012), but extend it to cover all of our language pairs. We use the RCV2 Reuters multilingual corpus7 for our experiments. In this task, for a language pair (l1, l2), a document classifier is trained using the document representations derived from word embeddings in language l1, and then the trained model is tested on documents from language l2 (and vice-versa). By using supervised training data in one language and evaluating without further supervision in another, CLDC assesses whether the learned cross-lingual representations are semantically coherent across multiple languages. All embeddings are learned on the data described in §3, and we only use the RCV2 data to learn document classification models. Following previous work, we compute document representation by taking the tf-idf weighted average of vectors of the words present in it.8 A multi-class classifier is trained using an averaged perceptron (Freund and Schapire, 1999) for 10 iterations, using the document vectors of language l1 as features9. Majority baselines for en →l2 and l1 →en are 49.7% and 46.7% respectively, for all languages. Table 5 shows the performance of different models across different language pairs. We computed confidence values using the McNemar test (McNe7http://trec.nist.gov/data/reuters/ reuters.html 8tf-idf (Salton and Buckley, 1988) was computed using all documents for that language in RCV2. 9We use the implementation of Klementiev et al. (2012). 1666 mar, 1947) and declare significant improvement if p < 0.05. Table 5 shows that in almost all cases, BiSkip performs significantly better than the remaining models. For transferring semantic knowledge across languages via embeddings, sentence and word level alignment proves superior to sentence or word level alignment alone. 
This observation is consistent with the trend in cross-lingual dictionary induction, where too the most expensive form of supervision performed the best. 4.5 Cross-lingual Dependency Parsing Using cross lingual similarity for direct-transfer of dependency parsers was first shown in T¨ackstr¨om et al. (2012). The idea behind direct-transfer is to train a dependency parsing model using embeddings for language l1 and then test the trained model on language l2, replacing embeddings for language l1 with those of l2. The transfer relies on coherence of the embeddings across languages arising from the cross lingual training. For our experiments, we use the cross lingual transfer setup of Guo et al. (2015).10 Their framework trains a transition-based dependency parser using nonlinear activation function, with the source-side embeddings as lexical features. These embeddings can be replaced by target-side embeddings at test time. All models are trained for 5000 iterations with fixed word embeddings during training. Since our goal is to determine the utility of word embeddings in dependency parsing, we turn off other features that can capture distributional information like brown clusters, which were originally used in Guo et al. (2015). We use the universal dependency treebank (McDonald et al., 2013) version2.0 for our evaluation. For Chinese, we use the treebank released as part of the CoNLL-X shared task (Buchholz and Marsi, 2006). We first evaluate how useful the word embeddings are in cross-lingual model transfer of dependency parsers (Table 6). On an average, BiCCA does better than other models. BiSkip is a close second, with an average performance gap of less than 1 point. BiSkip outperforms BiCVM on German and French (over 2 point improvement), owing to word alignment information BiSkip’s model uses during training. It is not surprising that English-Chinese transfer scores are low, due to the significant difference in syntactic structure of the 10github.com/jiangfeng1124/ acl15-clnndep l1 l2 BiSkip BiCVM BiCCA BiVCD en de 49.8 47.5 51.3 49.0 fr 65.8 63.2 65.9 60.7 sv 56.9 56.7 59.4 54.6 zh 6.4 6.1 6.4 6.0 de en 49.7 45.0 50.3 43.6 fr 53.3 50.6 54.2 49.5 sv 48.2 49.0 49.9 44.6 zh 0.17 0.12 0.17 0.15 avg. 41.3 39.8 42.2 38.5 Table 6: Labeled attachment score (LAS) for cross-lingual dependency parsing when trained on language l1, and evaluated on language l2. The best score for each language is shown in bold. two languages. Surprisingly, unlike the semantic tasks considered earlier, the models with expensive supervision requirements like BiSkip and BiCVM could not outperform a cheaply supervised BiCCA. We also evaluate whether using cross-lingually trained vectors for learning dependency parsers is better than using mono-lingually trained vectors in Table 7. We compare against parsing models trained using mono-lingually trained word vectors (column marked Mono in Table 7). These vectors are the same used as input to the BiCCA model. All other settings remain the same. On an average across language pairs, improvement over the monolingual embeddings was obtained with the BiSkip and BiCCA models, while BiCVM and BiVCD consistently performed worse. A possible reason for this is that BiCVM and BiVCD operate on sentence level contexts to learn the embeddings, which only captures the semantic meaning of the sentences and ignores the internal syntactic structure. As a result, embedding trained using BiCVM and BiVCD are not informative for syntactic tasks. 
On the other hand, BiSkip and BiCCA both utilize the word alignment information to train their embeddings and thus do better in capturing some notion of syntax. 5 Qualitative Analysis Figure 3 shows the PCA projection of some of the most frequent words in the English-French corpus. It is clear that BiSkip and BiCVM produce crosslingual vectors which are the most comparable, the English and French words which are translations of each other are represented by almost the same point in the vector-space. In BiCCA and BiVCD 1667 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 0.4 0.2 0.0 0.2 0.4 0.6 0.8 market world country energy problem law money children peace life war pays marché problème monde vie énergie enfants paix argent guerre loi (a) BiSkip 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 market law country problem energy money life children world peace war pays problème marché vie énergie argent monde enfants paix loi guerre (b) BiCVM 0.4 0.2 0.0 0.2 0.4 0.6 0.8 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 market world country energy problem law money children peace life war pays marché problème monde vie énergie enfants paix argent guerre loi (c) BiCCA 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 market world country energy problem law money children peace life war pays marché problème monde vie énergie enfants paix guerre argentloi (d) BiVCD Figure 3: PCA projection of word embeddings of some frequent words present in English-French corpus. English and French words are shown in blue and red respectively. l Mono BiSkip BiCVM BiCCA BiVCD de 71.1 72.0 60.4 71.4 58.9 fr 78.9 80.4 73.7 80.2 69.5 sv 75.5 78.2 70.5 79.0 64.5 zh 73.8 73.1 65.8 71.7 67.0 avg. 74.8 75.9 67.6 75.6 66.8 Table 7: Labeled attachment score (LAS) for dependency parsing when trained and tested on language l. Mono refers to parser trained with mono-lingually induced embeddings. Scores in bold are better than the Mono scores for each language, showing improvement from cross-lingual training. the translated words are more distant than BiSkip and BiCVM. This is not surprising because BiSkip and BiCVM require more expensive supervision at the sentence level in contrast to the other two models. An interesting observation is that BiCCA and BiVCD are better at separating antonyms. The words peace and war, (and their French translations paix and guerre) are well separated in BiCCA and BiVCD. However, in BiSkip and BiCVM these pairs are very close together. This can be attributed to the fact that BiSkip and BiCVM are trained on parallel sentences, and if two antonyms are present in the same sentence in English, they will also be present together in its French translation. However, BiCCA uses bilingual dictionary and BiVCD use comparable sentence context, which helps in pulling apart the synonyms and antonyms. 6 Discussion The goal of this paper was to formulate the task of learning cross-lingual word vector representations in a unified framework, and conduct experiments to compare the performance of existing 1668 models in a unbiased manner. We chose existing cross-lingual word vector models that can be trained on two languages at a given time. In recent work, Ammar et al. (2016) train multilingual word vectors using more than two languages; our comparison does not cover this setting. 
It is also worth noting that we compare here different crosslingual word embeddings, which are not to be confused with a collection of monolingual word embeddings trained for different languages individually (Al-Rfou et al., 2013). The paper does not cover all approaches that generate cross-lingual word embeddings. Some methods do not have publicly available code (Coulmance et al., 2015; Zou et al., 2013); for others, like BilBOWA (Gouws et al., 2015), we identified problems in the available code, which caused it to consistently produced results that are inferior even to mono-lingually trained vectors.11 However, the models that we included for comparison in our survey are representative of other cross-lingual models in terms of the form of crosslingual supervision required by them. For example, BilBOWA (Gouws et al., 2015) and crosslingual Auto-encoder (Chandar et al., 2014) are similar to BiCVM in this respect. Multi-view CCA (Rastogi et al., 2015) and deep CCA (Lu et al., 2015) can be viewed as extensions of BiCCA. Our choice of models was motivated to compare different forms of supervision, and therefore, adding these models, would not provide additional insight. 7 Conclusion We presented the first systematic comparative evaluation of cross-lingual embedding methods on several downstream NLP tasks, both intrinsic and extrinsic. We provided a unified representation for all approaches, showing them as instances of a general algorithm. Our choice of methods spans a diverse range of approaches, in that each requires a different form of supervision. Our experiments reveal interesting trends. When evaluating on intrinsic tasks such as monolingual word similarity, models relying on cheaper forms of supervision (such as BiVCD) perform almost on par with models requiring expensive supervision. On the other hand, for cross-lingual semantic tasks, like cross-lingual document classification and dictionary induction, the model with the most informative supervision performs best 11We contacted the authors of the papers and were unable to resolve the issues in the toolkit. overall. In contrast, for the syntactic task of dependency parsing, models that are supervised at a word alignment level perform slightly better. Overall this suggests that semantic tasks can benefit more from richer cross-lingual supervision, as compared to syntactic tasks. Acknowledgement This material is based on research sponsored by DARPA under agreement number FA8750-13-2-0008 and Contract HR0011-15-2-0025. Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proc. of CoNLL. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In Proc. of ACL. Sabine Buchholz and Erwin Marsi. 2006. Conll-X shared task on multilingual dependency parsing. In Proc. of CoNLL. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Proc. of NIPS. Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. 
Transgram, fast cross-lingual word-embeddings. In Proc. of EMNLP. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proc. of NAACL. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proc. of EACL. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: the concept revisited. In Proc. of WWW. Yoav Freund and Robert E Schapire. 1999. Large margin classification using the perceptron algorithm. Machine learning, 37(3):277–296. 1669 Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proc. of NAACL. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proc. of ICML. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proc. of ACL. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing. In Proc. of AAAI. David R Hardoon, Sandor Szedmak, and John ShaweTaylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neural computation, 16(12):2639–2664. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual Models for Compositional Distributional Semantics. In Proc. of ACL. Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456. Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proc. of COLING. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of MT Summit. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proc. of NAACL. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proc. of the Workshop on Vector Space Modeling for NLP. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proc. of ACL. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Jerome L. Myers and Arnold D. Well. 1995. Research Design & Statistical Analysis. Routledge. Pushpendre Rastogi, Benjamin Van Durme, and Raman Arora. 2015. Multiview LSA: Representation learning via generalized CCA. In Proceedings of NAACL. Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information Processing and Management. 
Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proc. of EMNLP. Anders Søgaard, ˇZeljko Agi´c, H´ector Mart´ınez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual nlp. In Proc. of ACL. James H Steiger. 1980. Tests for comparing elements of a correlation matrix. Psychological bulletin, 87(2):245. O. T¨ackstr¨om, R. McDonald, and J. Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In NAACL. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proc. of SIGHAN. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proc. of EMNLP. Ivan Vuli´c and Marie-Francine Moens. 2013a. Crosslingual semantic similarity of words as the similarity of their semantic word responses. In Proc. of NAACL. Ivan Vuli´c and Marie-Francine Moens. 2013b. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In Proc. of EMNLP. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proc. of ACL. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. of EMNLP. 1670
2016
157
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1671–1682, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Take and Took, Gaggle and Goose, Book and Read: Evaluating the Utility of Vector Differences for Lexical Relation Learning Ekaterina Vylomova,1 Laura Rimell,2 Trevor Cohn,1 and Timothy Baldwin1 1Department of Computing and Information Systems, University of Melbourne 2Computer Laboratory, University of Cambridge [email protected] [email protected] {tcohn,tbaldwin}@unimelb.edu.au Abstract Recent work has shown that simple vector subtraction over word embeddings is surprisingly effective at capturing different lexical relations, despite lacking explicit supervision. Prior work has evaluated this intriguing result using a word analogy prediction formulation and hand-selected relations, but the generality of the finding over a broader range of lexical relation types and different learning settings has not been evaluated. In this paper, we carry out such an evaluation in two learning settings: (1) spectral clustering to induce word relations, and (2) supervised learning to classify vector differences into relation types. We find that word embeddings capture a surprising amount of information, and that, under suitable supervised training, vector subtraction generalises well to a broad range of relations, including over unseen lexical items. 1 Introduction Learning to identify lexical relations is a fundamental task in natural language processing (“NLP”), and can contribute to many NLP applications including paraphrasing and generation, machine translation, and ontology building (Banko et al., 2007; Hendrickx et al., 2010). Recently, attention has been focused on identifying lexical relations using word embeddings, which are dense, low-dimensional vectors obtained either from a “predict-based” neural network trained to predict word contexts, or a “countbased” traditional distributional similarity method combined with dimensionality reduction. The skipgram model of Mikolov et al. (2013a) and other similar language models have been shown to perform well on an analogy completion task (Mikolov et al., 2013b; Mikolov et al., 2013c; Levy and Goldberg, 2014a), in the space of relational similarity prediction (Turney, 2006), where the task is to predict the missing word in analogies such as A:B :: C: –?–. A well-known example involves predicting the vector queen from the vector combination king −man + woman, where linear operations on word vectors appear to capture the lexical relation governing the analogy, in this case OPPOSITE-GENDER. The results extend to several semantic relations such as CAPITAL-OF (paris−france+poland ≈warsaw) and morphosyntactic relations such as PLURALISATION (cars −car + apple ≈apples). Remarkably, since the model is not trained for this task, the relational structure of the vector space appears to be an emergent property. The key operation in these models is vector difference, or vector offset. For example, the paris − france vector appears to encode CAPITAL-OF, presumably by cancelling out the features of paris that are France-specific, and retaining the features that distinguish a capital city (Levy and Goldberg, 2014a). The success of the simple offset method on analogy completion suggests that the difference vectors (“DIFFVEC” hereafter) must themselves be meaningful: their direction and/or magnitude encodes a lexical relation. 
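As a concrete illustration of the offset method and of difference vectors, here is a minimal sketch; it assumes a dict emb mapping words to numpy vectors (e.g. loaded from pre-trained skip-gram embeddings), and the function names are ours rather than part of any existing library.

```python
import numpy as np

def offset_analogy(a, b, c, emb, topn=1):
    """Answer A:B :: C:? with the offset method: return the words whose vectors are
    closest by cosine to emb[b] - emb[a] + emb[c], excluding the three query words."""
    query = emb[b] - emb[a] + emb[c]
    query = query / np.linalg.norm(query)
    sims = {w: float(v @ query / np.linalg.norm(v))
            for w, v in emb.items() if w not in {a, b, c}}
    return sorted(sims, key=sims.get, reverse=True)[:topn]

def diffvec(w1, w2, emb):
    """DIFFVEC for an ordered pair (w1, w2): simply the offset emb[w2] - emb[w1]."""
    return emb[w2] - emb[w1]

# e.g. offset_analogy("man", "king", "woman", emb) is expected to return ["queen"]
# when emb holds vectors from a model such as skip-gram.
```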
Previous analogy completion tasks used with word embeddings have limited coverage of lexical relation types. Moreover, the task does not explore the full implications of DIFFVECs as meaningful vector space objects in their own right, because it only looks for a one-best answer to the particular lexical analogies in the test set. In this paper, we introduce a new, larger dataset covering many well-known lexical relation types from the linguistics and cognitive science literature. We then apply DIFFVECs to two new tasks: unsupervised and supervised relation extraction. First, we cluster the DIFFVECs to test whether the clusters map onto true lexical relations. We find that the clustering 1671 works remarkably well, although syntactic relations are captured better than semantic ones. Second, we perform classification over the DIFFVECs and obtain remarkably high accuracy in a closed-world setting (over a predefined set of word pairs, each of which corresponds to a lexical relation in the training data). When we move to an open-world setting including random word pairs — many of which do not correspond to any lexical relation in the training data — the results are poor. We then investigate methods for better attuning the learned class representation to the lexical relations, focusing on methods for automatically synthesising negative instances. We find that this improves the model performance substantially. We also find that hyper-parameter optimised count-based methods are competitive with predictbased methods under both clustering and supervised relation classification, in line with the findings of Levy et al. (2015a). 2 Background and Related Work A lexical relation is a binary relation r holding between a word pair (wi, wj); for example, the pair (cart, wheel) stands in the WHOLE-PART relation. Relation learning in NLP includes relation extraction, relation classification, and relational similarity prediction. In relation extraction, related word pairs in a corpus and the relevant relation are identified. Given a word pair, the relation classification task involves assigning a word pair to the correct relation from a pre-defined set. In the Open Information Extraction paradigm (Banko et al., 2007; Weikum and Theobald, 2010), also known as unsupervised relation extraction, the relations themselves are also learned from the text (e.g. in the form of text labels). On the other hand, relational similarity prediction involves assessing the degree to which a word pair (A, B) stands in the same relation as another pair (C, D), or to complete an analogy A:B :: C: –?–. Relation learning is an important and long-standing task in NLP and has been the focus of a number of shared tasks (Girju et al., 2007; Hendrickx et al., 2010; Jurgens et al., 2012). Recently, attention has turned to using vector space models of words for relation classification and relational similarity prediction. Distributional word vectors have been used for detection of relations such as hypernymy (Geffet and Dagan, 2005; Kotlerman et al., 2010; Lenci and Benotto, 2012; Weeds et al., 2014; Rimell, 2014; Santus et al., 2014) and qualia structure (Yamada et al., 2009). An exciting development, and the inspiration for this paper, has been the demonstration that vector difference over word embeddings (Mikolov et al., 2013c) can be used to model word analogy tasks. This has given rise to a series of papers exploring the DIFFVEC idea in different contexts. 
The original analogy dataset has been used to evaluate predict-based language models by Mnih and Kavukcuoglu (2013) and also Zhila et al. (2013), who combine a neural language model with a pattern-based classifier. Kim and de Marneffe (2013) use word embeddings to derive representations of adjective scales, e.g. hot—warm—cool— cold. Fu et al. (2014) similarly use embeddings to predict hypernym relations, in this case clustering words by topic to show that hypernym DIFFVECs can be broken down into more fine-grained relations. Neural networks have also been developed for joint learning of lexical and relational similarity, making use of the WordNet relation hierarchy (Bordes et al., 2013; Socher et al., 2013; Xu et al., 2014; Yu and Dredze, 2014; Faruqui et al., 2015; Fried and Duh, 2015). Another strand of work responding to the vector difference approach has analysed the structure of predict-based embedding models in order to help explain their success on the analogy and other tasks (Levy and Goldberg, 2014a; Levy and Goldberg, 2014b; Arora et al., 2015). However, there has been no systematic investigation of the range of relations for which the vector difference method is most effective, although there have been some smallerscale investigations in this direction. Makrai et al. (2013) divide antonym pairs into semantic classes such as quality, time, gender, and distance, finding that for about two-thirds of antonym classes, DIFFVECs are significantly more correlated than random. Necs¸ulescu et al. (2015) train a classifier on word pairs, using word embeddings to predict coordinates, hypernyms, and meronyms. Roller and Erk (2016) analyse the performance of vector concatenation and difference on the task of predicting lexical entailment and show that vector concatenation overwhelmingly learns to detect Hearst patterns (e.g., including, such as). K¨oper et al. (2015) undertake a systematic study of morphosyntactic and semantic relations on word embeddings produced with word2vec (“w2v” hereafter; see §3.1) for English and German. They test a variety of relations including word similarity, antonyms, synonyms, hypernyms, and meronyms, in a novel analogy task. Although the set of relations tested by 1672 K¨oper et al. (2015) is somewhat more constrained than the set we use, there is a good deal of overlap. However, their evaluation is performed in the context of relational similarity, and they do not perform clustering or classification on the DIFFVECs. 3 General Approach and Resources We define the task of lexical relation learning to take a set of (ordered) word pairs {(wi, wj)} and a set of binary lexical relations R = {rk}, and map each word pair (wi, wj) as follows: (a) (wi, wj) 7→rk ∈R, i.e. the “closed-world” setting, where we assume that all word pairs can be uniquely classified according to a relation in R; or (b) (wi, wj) 7→rk ∈R ∪{φ} where φ signifies the fact that none of the relations in R apply to the word pair in question, i.e. the “open-world” setting. Our starting point for lexical relation learning is the assumption that important information about various types of relations is implicitly embedded in the offset vectors. While a range of methods have been proposed for composing word vectors (Baroni et al., 2012; Weeds et al., 2014; Roller et al., 2014), in this research we focus exclusively on DIFFVEC (i.e. w2 −w1). A second assumption is that there exist dimensions, or directions, in the embedding vector spaces responsible for a particular lexical relation. 
Such dimensions could be identified and exploited as part of a clustering or classification method, in the context of identifying relations between word pairs or classes of DIFFVECs. In order to test the generalisability of the DIFFVEC method, we require: (1) word embeddings, and (2) a set of lexical relations to evaluate against. As the focus of this paper is not the word embedding pre-training approaches so much as the utility of the DIFFVECs for lexical relation learning, we take a selection of four pre-trained word embeddings with strong currency in the literature, as detailed in §3.1. We also include the state-of-the-art count-based approach of Levy et al. (2015a), to test the generalisability of DIFFVECs to count-based word embeddings. For the lexical relations, we want a range of relations that is representative of the types of relational learning tasks targeted in the literature, and where there is availability of annotated data. To this end, we construct a dataset from a variety of sources, focusing on lexical semantic relations (which are less well represented in the analogy dataset of Mikolov et al. (2013c)), but also including morphosyntactic and morphosemantic relations (see §3.2). Name Dimensions Training data w2v 300 100 × 109 GloVe 200 6 × 109 SENNA 100 37 × 106 HLBL 200 37 × 106 w2vwiki 300 50 × 106 GloVewiki 300 50 × 106 SVDwiki 300 50 × 106 Table 1: The pre-trained word embeddings used in our experiments, with the number of dimensions and size of the training data (in word tokens). The models trained on English Wikipedia (“wiki”) are in the lower half of the table. 3.1 Word Embeddings We consider four highly successful word embedding models in our experiments: w2v (Mikolov et al., 2013a; Mikolov et al., 2013b), GloVe (Pennington et al., 2014), SENNA (Collobert and Weston, 2008), and HLBL (Mnih and Hinton, 2009), as detailed below. We also include SVD (Levy et al., 2015a), a count-based model which factorises a positive PMI (PPMI) matrix. For consistency of comparison, we train SVD as well as a version of w2v and GloVe (which we call w2vwiki and GloVewiki, respectively) on the English Wikipedia corpus (comparable in size to the training data of SENNA and HLBL), and apply the preprocessing of Levy et al. (2015a). We additionally normalise the w2vwiki and SVDwiki vectors to unit length; GloVewiki is natively normalised by column.1 w2v CBOW (Continuous Bag-Of-Words; Mikolov et al. (2013a)) predicts a word from its context using a model with the objective: J = 1 T T X i=1 log exp w⊤ i P j∈[−c,+c],j̸=0 ˜wi+j ! PV k=1 exp w⊤ k P j∈[−c,+c],j̸=0 ˜wi+j ! where wi and ˜wi are the vector representations for the ith word (as a focus or context word, respectively), V is the vocabulary size, T is the number of tokens in the corpus, and c is the context window size.2 Google News data was used 1We ran a series of experiments on normalised and unnormalised w2v models, and found that normalisation tends to boost results over most of our relations (with the exception of LEXSEMEvent and NOUNColl). We leave a more detailed investigation of normalisation to future work. 
2In a slight abuse of notation, the subscripts of w do double 1673 Relation Description Pairs Source Example LEXSEMHyper hypernym 1173 SemEval’12 + BLESS (animal, dog) LEXSEMMero meronym 2825 SemEval’12 + BLESS (airplane, cockpit) LEXSEMAttr characteristic quality, action 71 SemEval’12 (cloud, rain) LEXSEMCause cause, purpose, or goal 249 SemEval’12 (cook, eat) LEXSEMSpace location or time association 235 SemEval’12 (aquarium, fish) LEXSEMRef expression or representation 187 SemEval’12 (song, emotion) LEXSEMEvent object’s action 3583 BLESS (zip, coat) NOUNSP plural form of a noun 100 MSR (year, years) VERB3 first to third person verb present-tense form 99 MSR (accept, accepts) VERBPast present-tense to past-tense verb form 100 MSR (know, knew) VERB3Past third person present-tense to past-tense verb form 100 MSR (creates, created) LVC light verb construction 58 Tan et al. (2006b) (give, approval) VERBNOUN nominalisation of a verb 3303 WordNet (approve, approval) PREFIX prefixing with re morpheme 118 Wiktionary (vote, revote) NOUNColl collective noun 257 Web source (army, ants) Table 2: Description of the 15 lexical relations. to train the model. We use the focus word vectors, W = {wk}V k=1, normalised such that each ∥wk∥= 1. The GloVe model (Pennington et al., 2014) is based on a similar bilinear formulation, framed as a low-rank decomposition of the matrix of corpus co-occurrence frequencies: J = 1 2 V X i,j=1 f(Pij)(w⊤ i ˜wj −log Pij)2 , where wi is a vector for the left context, wj is a vector for the right context, Pij is the relative frequency of word j in the context of word i, and f is a heuristic weighting function to balance the influence of high versus low term frequencies. The model was trained on English Wikipedia and the English Gigaword corpus version 5. The SVD model (Levy et al., 2015a) uses positive pointwise mutual information (PMI) matrix defined as: PPMI(w, c) = max(log ˆP(w, c) ˆP(w) ˆP(c) , 0) , where ˆP(w, c) is the joint probability of word w and context c, and ˆP(w) and ˆP(c) are their marginal probabilities. The matrix is factorised by singular value decomposition. HLBL (Mnih and Hinton, 2009) is a log-bilinear formulation of an n-gram language model, which predicts the ith word based on context words (i − n, . . . , i −2, i −1). This leads to the following training objective: J = 1 T T X i=1 exp(˜w⊤ i wi + bi) PV k=1 exp(˜w⊤ i wk + bk) , duty, denoting either the embedding for the ith token, wi, or kth word type, wk. where ˜wi = Pn−1 j=1 Cjwi−j is the context embedding, Cj is a scaling matrix, and b∗is a bias term. The final model, SENNA (Collobert and Weston, 2008), was initially proposed for multi-task training of several language processing tasks, from language modelling through to semantic role labelling. Here we focus on the statistical language modelling component, which has a pairwise ranking objective to maximise the relative score of each word in its local context: J = 1 T T X i=1 V X k=1 max  0, 1 −f(wi−c, . . . , wi−1, wi) + f(wi−c, . . . , wi−1, wk)  , where the last c −1 words are used as context, and f(x) is a non-linear function of the input, defined as a multi-layer perceptron. For HLBL and SENNA, we use the pre-trained embeddings from Turian et al. (2010), trained on the Reuters English newswire corpus. In both cases, the embeddings were scaled by the global standard deviation over the word-embedding matrix, Wscaled = 0.1 × W σ(W). For w2vwiki, GloVewiki and SVDwiki we used English Wikipedia. 
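The count-based SVD model lends itself to a compact sketch: build a PPMI matrix from word-context co-occurrence counts (with context distribution smoothing) and keep the top singular vectors with eigenvalue weighting. The code below is an illustrative reimplementation of that recipe, not the exact pipeline of Levy et al. (2015a); the dense SVD shown here would be replaced by a sparse or truncated solver at realistic vocabulary sizes.

```python
import numpy as np

def ppmi_matrix(counts, cds=0.75):
    """Positive PMI from a word-by-context co-occurrence count matrix, with context
    distribution smoothing (context counts raised to the power cds)."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_wc = counts / total
    p_w = counts.sum(axis=1, keepdims=True) / total
    context = counts.sum(axis=0) ** cds
    p_c = context / context.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    # Zero cells get PPMI 0; positive cells keep max(PMI, 0).
    return np.where(counts > 0, np.maximum(pmi, 0.0), 0.0)

def svd_embeddings(ppmi, dim=300, eig=0.5):
    """Factorise the PPMI matrix and return word vectors U_d * S_d^eig
    (eigenvalue weighting eig=0.5, as recommended by Levy et al. 2015a)."""
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    d = min(dim, len(s))
    return u[:, :d] * (s[:d] ** eig)
```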
We followed the same preprocessing procedure described in Levy et al. (2015a),3 i.e., lower-cased all words and removed non-textual elements. During the training phase, for each model we set a word frequency threshold of 5. For the SVD model, we followed the recommendations of Levy et al. (2015a) in setting the context window size to 2, negative sampling parameter to 1, eigenvalue weighting to 0.5, and context distribution smoothing to 0.75; other parameters were assigned 3Although the w2v model trained without preprocessing performed marginally better, we used preprocessing throughout for consistency. 1674 their default values. For the other models we used the following parameter values: for w2v, context window = 8, negative samples = 25, hs = 0, sample = 1e-4, and iterations = 15; and for GloVe, context window = 15, x max = 10, and iterations = 15. 3.2 Lexical Relations In order to evaluate the applicability of the DIFFVEC approach to relations of different types, we assembled a set of lexical relations in three broad categories: lexical semantic relations, morphosyntactic paradigm relations, and morphosemantic relations. We constrained the relations to be binary and to have fixed directionality.4 Consequently we excluded symmetric lexical relations such as synonymy. We additionally constrained the dataset to the words occurring in all embedding sets. There is some overlap between our relations and those included in the analogy task of Mikolov et al. (2013c), but we include a much wider range of lexical semantic relations, especially those standardly evaluated in the relation classification literature. We manually filtered the data to remove duplicates (e.g., as part of merging the two sources of LEXSEMHyper intances), and normalise directionality. The final dataset consists of 12,458 triples ⟨relation, word1, word2⟩, comprising 15 relation types, extracted from SemEval’12 (Jurgens et al., 2012), BLESS (Baroni and Lenci, 2011), the MSR analogy dataset (Mikolov et al., 2013c), the light verb dataset of Tan et al. (2006a), Princeton WordNet (Fellbaum, 1998), Wiktionary,5 and a web lexicon of collective nouns,6 as listed in Table 2.7 4 Clustering Assuming DIFFVECs are capable of capturing all lexical relations equally, we would expect clustering to be able to identify sets of word pairs with high relational similarity, or equivalently clusters of similar offset vectors. Under the additional assumption that a given word pair corresponds to a unique lexical relation (in line with our definition of the lexical relation learning task in §3), a hard clustering approach is appropriate. In order to 4Word similarity is not included; it is not easily captured by DIFFVEC since there is no homogeneous “content” to the lexical relation which could be captured by the direction and magnitude of a difference vector (other than that it should be small). 5http://en.wiktionary.org 6http://www.rinkworks.com/words/collective. shtml 7The dataset is available at http://github.com/ivri/ DiffVec LEXSEMAttr LEXSEMCause NOUNColl LEXSEMEvent LEXSEMHyper LVC LEXSEMMero NOUNSP PREFIX LEXSEMRef LEXSEMSpace VERB3 VERB3Past VERBPast VERBNOUN Figure 1: t-SNE projection (Van der Maaten and Hinton, 2008) of DIFFVECs for 10 sample word pairs of each relation type, based on w2v. The intersection of the two axes identify the projection of the zero vector. Best viewed in colour. test these assumptions, we cluster our 15-relation closed-world dataset in the first instance, and evaluate against the lexical resources in §3.2. 
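Before clustering, the ⟨relation, word1, word2⟩ triples have to be turned into DIFFVECs. A minimal sketch, assuming emb is a dict of (unit-normalised) word vectors and triples is a list of (relation, word1, word2) tuples as in the dataset described above:

```python
import numpy as np

def build_diffvec_dataset(triples, emb):
    """Turn (relation, w1, w2) triples into a DIFFVEC matrix X and label vector y,
    dropping pairs with out-of-vocabulary words (the dataset is already restricted
    to words present in all embedding sets)."""
    X, y = [], []
    for rel, w1, w2 in triples:
        if w1 in emb and w2 in emb:
            X.append(emb[w2] - emb[w1])  # DIFFVEC = w2 - w1
            y.append(rel)
    return np.stack(X), np.array(y)
```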
As further motivation, we projected the DIFFVEC space for a small number of samples of each class using t-SNE (Van der Maaten and Hinton, 2008), and found that many of the morphosyntactic relations (VERB3, VERBPast, VERB3Past, NOUNSP) form tight clusters (Figure 1). We cluster the DIFFVECs between all word pairs in our dataset using spectral clustering (Von Luxburg, 2007). Spectral clustering has two hyperparameters: the number of clusters, and the pairwise similarity measure for comparing DIFFVECs. We tune the hyperparameters over development data, in the form of 15% of the data obtained by random sampling, selecting the configuration that maximises the V-Measure (Rosenberg and Hirschberg, 2007). Figure 2 presents V-Measure values over the test data for each of the four word embedding models. We show results for different numbers of clusters, from N = 10 in steps of 10, up to N = 80 (beyond which the clustering quality diminishes).8 Observe that w2v achieves the best results, with a V-Measure value of around 0.36,9 which is relatively constant over varying numbers of clusters. GloVe and SVD mirror this result, but are consistently below w2v at a V-Measure of around 0.31. HLBL and SENNA performed very 8Although 80 clusters ≫our 15 relation types, the SemEval’12 classes each contain numerous subclasses, so the larger number may be more realistic. 9V-Measure returns a value in the range [0, 1], with 1 indicating perfect homogeneity and completeness. 1675 10 20 30 40 50 60 70 80 0.15 0.20 0.25 0.30 0.35 0.40 Number of clusters V-Measure w2v w2v wiki GloVe GloVewiki SVDwiki HLBL SENNA Figure 2: Spectral clustering results, comparing cluster quality (V-Measure) and the number of clusters. DIFFVECs are clustered and compared to the known relation types. Each line shows a different source of word embeddings. w2v GloVe HLBL SENNA LEXSEMAttr 0.49 0.54 0.62 0.63 LEXSEMCause 0.47 0.53 0.56 0.57 LEXSEMSpace 0.49 0.55 0.54 0.58 LEXSEMRef 0.44 0.50 0.54 0.56 LEXSEMHyper 0.44 0.50 0.43 0.45 LEXSEMEvent 0.46 0.47 0.47 0.48 LEXSEMMero 0.40 0.42 0.42 0.43 NOUNSP 0.07 0.14 0.22 0.29 VERB3 0.05 0.06 0.49 0.44 VERBPast 0.09 0.14 0.38 0.35 VERB3Past 0.07 0.05 0.49 0.52 LVC 0.28 0.55 0.32 0.30 VERBNOUN 0.31 0.33 0.35 0.36 PREFIX 0.32 0.30 0.55 0.58 NOUNColl 0.21 0.27 0.46 0.44 Table 3: The entropy for each lexical relation over the clustering output for each set of pre-trained word embeddings. similarly, at a substantially lower V-Measure than w2v or GloVe, closer to 0.21. As a crude calibration for these results, over the related clustering task of word sense induction, the best-performing systems in SemEval-2010 Task 4 (Manandhar et al., 2010) achieved a V-Measure of under 0.2. The lower V-measure for w2vwiki and GloVewiki (as compared to w2v and GloVe, respectively) indicates that the volume of training data plays a role in the clustering results. However, both methods still perform well above SENNA and HLBL, and w2v has a clear empirical advantage over GloVe. We note that SVDwiki performs almost as well as w2vwiki, consistent with the results of Levy et al. (2015a). We additionally calculated the entropy for each lexical relation, based on the distribution of instances belonging to a given relation across the different clusters (and simple MLE). For each embedding method, we present the entropy for the cluster size where V-measure was maximised over the development data. 
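A sketch of the clustering step, using scikit-learn's spectral clustering scored with V-Measure; the nearest-neighbour affinity used here is only one plausible choice for the pairwise similarity hyper-parameter that the paper tunes on development data.

```python
from sklearn.cluster import SpectralClustering
from sklearn.metrics import v_measure_score

def cluster_diffvecs(X, gold_relations, n_clusters=50, seed=0):
    """Spectral clustering of DIFFVEC rows, evaluated against gold relation labels
    with V-Measure (1.0 = perfectly homogeneous and complete clustering)."""
    labels = SpectralClustering(
        n_clusters=n_clusters,
        affinity="nearest_neighbors",  # an assumption; the paper tunes this choice
        n_neighbors=10,
        random_state=seed,
    ).fit_predict(X)
    return labels, v_measure_score(gold_relations, labels)
```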
Since the samples are distributed nonuniformly, we normalise entropy results for each method by log(n) where n is the number of samples in a particular relation. The results are in Table 3, with the lowest entropy (purest clustering) for each relation indicated in bold. Looking across the different lexical relation types, the morphosyntactic paradigm relations (NOUNSP and the three VERB relations) are by far the easiest to capture. The lexical semantic relations, on the other hand, are the hardest to capture for all embeddings. Considering w2v embeddings, for VERB3 there was a single cluster consisting of around 90% of VERB3 word pairs. Most errors resulted from POS ambiguity, leading to confusion with VERBNOUN in particular. Example VERB3 pairs incorrectly clustered are: (study, studies), (run, runs), and (like, likes). This polysemy results in the distance represented in the DIFFVEC for such pairs being above average for VERB3, and consequently clustered with other cross-POS relations. For VERBPast, a single relatively pure cluster was generated, with minor contamination due to pairs such as (hurt, saw), (utensil, saw), and (wipe, saw). Here, the noun saw is ambiguous with a high-frequency past-tense verb; hurt and wipe also have ambigous POS. A related phenomenon was observed for NOUNColl, where the instances were assigned to a large mixed cluster containing word pairs where the second word referred to an animal, reflecting the fact that most of the collective nouns in our dataset relate to animals, e.g. (stand, horse), (ambush, tigers), (antibiotics, bacteria). This is interesting from a DIFFVEC point of view, since it shows that the lexical semantics of one word in the pair can overwhelm the semantic content of the DIFFVEC (something that we return to investigate in §5.4). LEXSEMMero was also split into multiple clusters along topical lines, with separate clusters for weapons, dwellings, vehicles, etc. Given the encouraging results from our clustering experiment, we next evaluate DIFFVECs in a supervised relation classification setting. 1676 5 Classification A natural question is whether we can accurately characterise lexical relations through supervised learning over the DIFFVECs. For these experiments we use the w2v, w2vwiki, and SVDwiki embeddings exclusively (based on their superior performance in the clustering experiment), and a subset of the relations which is both representative of the breadth of the full relation set, and for which we have sufficient data for supervised training and evaluation, namely: NOUNColl, LEXSEMEvent, LEXSEMHyper, LEXSEMMero, NOUNSP, PREFIX, VERB3, VERB3Past, and VERBPast (see Table 2). We consider two applications: (1) a CLOSEDWORLD setting similar to the unsupervised evaluation, in which the classifier only encounters word pairs which correspond to one of the nine relations; and (2) a more challenging OPEN-WORLD setting where random word pairs — which may or may not correspond to one of our relations — are included in the evaluation. For both settings, we further investigate whether there is a lexical memorisation effect for a broad range of relation types of the sort identified by Weeds et al. (2014) and Levy et al. (2015b) for hypernyms, by experimenting with disjoint training and test vocabulary. 5.1 CLOSED-WORLD Classification For the CLOSED-WORLD setting, we train and test a multiclass classifier on datasets comprising ⟨DIFFVEC, r⟩pairs, where r is one of our nine relation types, and DIFFVEC is based on one of w2v, w2vwiki and SVD. 
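A sketch of the closed-world classification experiment, assuming X and y were built as in the earlier dataset-construction snippet. The linear SVM and 10-fold cross-validation mirror the setup described in the following paragraphs, but the hyper-parameters here are scikit-learn defaults.

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC

def closed_world_classification(X, y):
    """Multiclass classification of DIFFVECs into relation types with a linear SVM,
    evaluated by 10-fold cross-validation; returns per-relation precision/recall/F."""
    preds = cross_val_predict(LinearSVC(), X, y, cv=10)
    return classification_report(y, preds)
```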
As a baseline, we cluster the data as described in §4, running the clusterer several times over the 9-relation data to select the optimal V-Measure value based on the development data, resulting in 50 clusters. We label each cluster with the majority class based on the training instances, and evaluate the resultant labelling for the test instances. We use an SVM with a linear kernel, and report results from 10-fold cross-validation in Table 4. The SVM achieves a higher F-score than the baseline on almost every relation, particularly on LEXSEMHyper, and the lower-frequency NOUNSP, NOUNColl, and PREFIX. Most of the relations — even the most difficult ones from our clustering experiment — are classified with very high Fscore. That is, with a simple linear transformation of the embedding dimensions, we are able to achieve near-perfect results. The PREFIX relation achieved markedly lower recall, resulting in a lower Relation Baseline w2v w2vwiki SVDwiki LEXSEMHyper 0.60 0.93 0.91 0.91 LEXSEMMero 0.90 0.97 0.96 0.96 LEXSEMEvent 0.87 0.98 0.97 0.97 NOUNSP 0.00 0.83 0.78 0.74 VERB3 0.99 0.98 0.96 0.97 VERBPast 0.78 0.98 0.98 0.95 VERB3Past 0.99 0.98 0.98 0.96 PREFIX 0.00 0.82 0.34 0.60 NOUNColl 0.19 0.95 0.91 0.92 Micro-average 0.84 0.97 0.95 0.95 Table 4: F-scores (F) for CLOSED-WORLD classification, for a baseline method based on clustering + majority-class labelling, a multiclass linear SVM trained on w2v, w2vwiki and SVDwiki DIFFVEC inputs. F-score, due to large differences in the predominant usages associated with the respective words (e.g., (union, reunion), where the vector for union is heavily biased by contexts associated with trade unions, but reunion is heavily biased by contexts relating to social get-togethers; and (entry, reentry), where entry is associated with competitions and entrance to schools, while reentry is associated with space travel). Somewhat surprisingly, given the small dimensionality of the input (vectors of size 300 for all three methods), we found that the linear SVM slightly outperformed a non-linear SVM using an RBF kernel. We observe no real difference between w2vwiki and SVDwiki, supporting the hypothesis of Levy et al. (2015a) that under appropriate parameter settings, count-based methods achieve high results. The impact of the training data volume for pre-training of the embeddings is also less pronounced than in the case of our clustering experiment. 5.2 OPEN-WORLD Classification We now turn to a more challenging evaluation setting: a test set including word pairs drawn at random. This setting aims to illustrate whether a DIFFVEC-based classifier is capable of differentiating related word pairs from noise, and can be applied to open data to learn new related word pairs.10 For these experiments, we train a binary classifier for each relation type, using 2 3 of our relation data for training and 1 3 for testing. The test data is augmented with an equal quantity of random pairs, generated as follows: (1) sample a seed lexicon by drawing words proportional to their frequency in Wikipedia;11 10Hereafter we provide results for w2v only, as we found that SVD achieved similar results. 
11Filtered to consist of words for which we have embed1677 Relation Orig +neg P R F P R F LEXSEMHyper 0.95 0.92 0.93 0.99 0.84 0.91 LEXSEMMero 0.13 0.96 0.24 0.95 0.84 0.89 LEXSEMEvent 0.44 0.98 0.61 0.93 0.90 0.91 NOUNSP 0.95 0.68 0.8 1.00 0.68 0.81 VERB3 0.75 1.00 0.86 0.93 0.93 0.93 VERBPast 0.94 0.86 0.90 0.97 0.84 0.90 VERB3Past 0.76 0.95 0.84 0.87 0.93 0.90 PREFIX 1.00 0.29 0.44 1.00 0.13 0.23 NOUNColl 0.43 0.74 0.55 0.97 0.41 0.57 Table 5: Precision (P) and recall (R) for OPENWORLD classification, using the binary classifier without (“Orig”) and with (“+neg”) negative samples . (2) take the Cartesian product over pairs of words from the seed lexicon; (3) sample word pairs uniformly from this set. This procedure generates word pairs that are representative of the frequency profile of our corpus. We train 9 binary RBF-kernel SVM classifiers on the training partition, and evaluate on our randomly augmented test set. Fully annotating our random word pairs is prohibitively expensive, so instead, we manually annotated only the word pairs which were positively classified by one of our models. The results of our experiments are presented in the left half of Table 5, in which we report on results over the combination of the original test data from §5.1 and the random word pairs, noting that recall (R) for OPEN-WORLD takes the form of relative recall (Pantel et al., 2004) over the positively-classified word pairs. The results are much lower than for the closed-word setting (Table 4), most notably in terms of precision (P). For instance, the random pairs (have, works), (turn, took), and (works, started) were incorrectly classified as VERB3, VERBPast and VERB3Past, respectively. That is, the model captures syntax, but lacks the ability to capture lexical paradigms, and tends to overgenerate. 5.3 OPEN-WORLD Training with Negative Sampling To address the problem of incorrectly classifying random word pairs as valid relations, we retrain the classifier on a dataset comprising both valid and automatically-generated negative distractor samples. The basic intuition behind this approach is to construct samples which will force the model to learn decision boundaries that more tightly capture the true scope of a given relation. To this end, we automatically generated two types of negative dings. distractors: opposite pairs: generated by switching the order of word pairs, Opposw1,w2 = word1 − word2. This ensures the classifier adequately captures the asymmetry in the relations. shuffled pairs: generated by replacing w2 with a random word w′ 2 from the same relation, Shuffw1,w2 = word′ 2 −word1. This is targeted at relations that take specific word classes in particular positions, e.g., (VB, VBD) word pairs, so that the model learns to encode the relation rather than simply learning the properties of the word classes. Both types of distractors are added to the training set, such that there are equal numbers of valid relations, opposite pairs and shuffled pairs. After training our classifier, we evaluate its predictions in the same way as in §5.2, using the same test set combining related and random word pairs.12 The results are shown in the right half of Table 5 (as “+neg”). Observe that the precision is much higher and recall somewhat lower compared to the classifier trained with only positive samples. 
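The two kinds of distractors are easy to generate from the positive pairs of a single relation. The sketch below operates on word pairs, whose DIFFVECs are then computed exactly as for the positives; the sampling details (seeding, whether the replacement word may coincide with the original w2) are our simplifications.

```python
import random

def opposite_pairs(pairs):
    """Swap the order of each (w1, w2) pair, so the negative DIFFVEC is word1 - word2;
    this forces the classifier to learn the asymmetry of the relation."""
    return [(w2, w1) for w1, w2 in pairs]

def shuffled_pairs(pairs, seed=0):
    """Replace w2 with a random second word drawn from other pairs of the same relation,
    so negatives share word classes with positives but break the actual pairing."""
    rng = random.Random(seed)
    seconds = [w2 for _, w2 in pairs]
    return [(w1, rng.choice(seconds)) for w1, _ in pairs]
```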
This follows from the adversarial training scenario: using negative distractors results in a more conservative classifier, that correctly classifies the vast majority of the random word pairs as not corresponding to a given relation, resulting in higher precision at the expense of a small drop in recall. Overall this leads to higher F-scores, as shown in Figure 3, other than for hypernyms (LEXSEMHyper) and prefixes (PREFIX). For example, the standard classifier for NOUNColl learned to match word pairs including an animal name (e.g., (plague, rats)), while training with negative samples resulted in much more conservative predictions and consequently much lower recall. The classifier was able to capture (herd, horses) but not (run, salmon), (party, jays) or (singular, boar) as instances of NOUNColl, possibly because of polysemy. The most striking difference in performance was for LEXSEMMero, where the standard classifier generated many false positive noun pairs (e.g. (series, radio)), but the false positive rate was considerably reduced with negative sampling. 5.4 Lexical Memorisation Weeds et al. (2014) and Levy et al. (2015b) recently showed that supervised methods using DIFFVECs achieve artificially high results as a result of “lexical memorisation” over frequent words asso12But noting that relative recall for the random word pairs is based on the pool of positive predictions from both models. 1678 Usage of Negative Samples 0.0 0.2 0.4 0.6 0.8 1.0 LexSemHyper LexSemMero LexSemEvent NounSP Verb3 VerbPast Verb3Past Prefix NounColl No Negative Samples With Negative Samples Figure 3: F-score for OPEN-WORLD classification, comparing models trained with and without negative samples. 0 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 Volume of random word pairs P / R / F P P+neg R R+neg F F+neg Figure 4: Evaluation of the OPEN-WORLD model when trained on split vocabulary, for varying numbers of random word pairs in the test dataset (expressed as a multiplier relative to the number of CLOSED-WORLD test instances). ciated with the hypernym relation. For example, (animal, cat), (animal, dog), and (animal, pig) all share the superclass animal, and the model thus learns to classify as positive any word pair with animal as the first word. To address this effect, we follow Levy et al. (2015b) in splitting our vocabulary into training and test partitions, to ensure there is no overlap between training and test vocabulary. We then train classifiers with and without negative sampling (§5.3), incrementally adding the random word pairs from §5.2 to the test data (from no random word pairs to five times the original size of the test data) to investigate the interaction of negative sampling with greater diversity in the test set when there is a split vocabulary. The results are shown in Figure 4. Observe that the precision for the standard classifier decreases rapidly as more random word pairs are added to the test data. In comparison, the precision when negative sampling is used shows only a small drop-off, indicating that negative sampling is effective at maintaining precision in an OPENWORLD setting even when the training and test vocabulary are disjoint. This benefit comes at the expense of recall, which is much lower when negative sampling is used (note that recall stays relatively constant as random word pairs are added, as the vast majority of them do not correspond to any relation). 
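One simple way to realise the disjoint-vocabulary split is to partition the vocabulary first and keep only the pairs whose two words fall on the same side; pairs straddling the split are discarded. The sketch below illustrates that idea and may differ in detail from the protocol of Levy et al. (2015b).

```python
import random

def split_by_vocabulary(pairs, test_fraction=0.3, seed=0):
    """Split word pairs so that the training and test vocabularies are disjoint,
    preventing lexical memorisation of frequent relation-specific words."""
    rng = random.Random(seed)
    vocab = sorted({w for pair in pairs for w in pair})
    rng.shuffle(vocab)
    cut = int(len(vocab) * test_fraction)
    test_vocab = set(vocab[:cut])
    train_vocab = set(vocab[cut:])
    train = [p for p in pairs if set(p) <= train_vocab]
    test = [p for p in pairs if set(p) <= test_vocab]
    return train, test
```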
At the maximum level of random word pairs in the test data, the F-score for the negative sampling classifier is higher than for the standard classifier. 6 Conclusions This paper is the first to test the generalisability of the vector difference approach across a broad range of lexical relations (in raw number and also variety). Using clustering we showed that many types of morphosyntactic and morphosemantic differences are captured by DIFFVECs, but that lexical semantic relations are captured less well, a finding which is consistent with previous work (K¨oper et al., 2015). In contrast, classification over the DIFFVECs works extremely well in a closed-world setting, showing that dimensions of DIFFVECs encode lexical relations. Classification performs less well over open data, although with the introduction of automatically-generated negative samples, the results improve substantially. Negative sampling also improves classification when the training and test vocabulary are split to minimise lexical memorisation. Overall, we conclude that the DIFFVEC approach has impressive utility over a broad range of lexical relations, especially under supervised classification. Acknowledgments LR was supported by EPSRC grant EP/I037512/1 and ERC Starting Grant DisCoTex (306920). TC and TB were supported by the Australian Research Council. References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2015. Random walks on context spaces: Towards an explanation of the mysteries of semantic word embeddings. arXiv:1502.03520 [cs.LG]. 1679 Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction for the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-2007), pages 2670– 2676, Hyderabad, India. Marco Baroni and Alessandro Lenci. 2011. How we BLESSed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS ’11, pages 1–10, Edinburgh, Scotland. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the EACL (EACL 2012), pages 23–32, Avignon, France. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 25 (NIPS-13), pages 2787– 2795. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages 160–167, Helsinki, Finland. Manaal Faruqui, Jesse Dodge, Sujay Jauhar, Chris Dyer, Ed Hovy, and Noah Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics — Human Language Technologies (NAACL HLT 2015), pages 1351–1356, Denver, USA. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, USA. Daniel Fried and Kevin Duh. 2015. Incorporating both distributional and relational semantics in word representations. In Proceedings of the Third International Conference on Learning Representations (ICLR 2015), San Diego, USA. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. 
Learning semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 1199–1209, Baltimore, USA. Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 107–114, Ann Arbor, USA. Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. SemEval-2007 Task 4: Classification of semantic relations between nominals. In Proceedings of the 4th International Workshop on Semantic Evaluation (SemEval 2007), pages 13–18, Prague, Czech Republic. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 Task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval 2010), pages 33–38, Uppsala, Sweden. David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. SemEval-2012 Task 2: Measuring degrees of relational similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation (SemEval 2012), pages 356–364, Montr´eal, Canada. Joo-Kyung Kim and Marie-Catherine de Marneffe. 2013. Deriving adjectival scales from continuous space word representations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1625– 1630, Seattle, USA. Maximilian K¨oper, Christian Scheible, and Sabine Schulte im Walde. 2015. Multilingual reliability and “semantic” structure of continuous word spaces. In Proceedings of the Eleventh International Workshop on Computational Semantics (IWCS-11), pages 40–45, London, UK. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16:359–389. Alessandro Lenci and Giulia Benotto. 2012. Identifying hypernyms in distributional semantic spaces. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), pages 75–79, Montr´eal, Canada. Omer Levy and Yoav Goldberg. 2014a. Linguistic regularities in sparse and explicit word representations. In Proceedings of the 18th Conference on Natural Language Learning (CoNLL-2014), pages 171–180, Baltimore, USA. Omer Levy and Yoav Goldberg. 2014b. Neural word embeddings as implicit matrix factorization. In Advances in Neural Information Processing Systems 26 (NIPS-14). Omer Levy, Yoav Goldberg, and Ido Dagan. 2015a. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211– 225. 1680 Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan, and Israel Ramat-Gan. 2015b. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics — Human Language Technologies (NAACL HLT 2015), pages 970–976, Denver, USA. M´arton Makrai, D´avid Nemeskey, and Andr´as Kornai. 2013. Applicative structure in vector space models. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 59–63, Sofia, Bulgaria. Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. 
SemEval-2010 Task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63–68, Uppsala, Sweden. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of the Workshop of the First International Conference on Learning Representations (ICLR 2013), Scottsdale, USA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 25 (NIPS-13). Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013), pages 746–751, Atlanta, USA. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems 21 (NIPS-09), pages 1081–1088. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems 25 (NIPS-13). Silvia Necs¸ulescu, Sara Mendes, David Jurgens, N´uria Bel, and Roberto Navigli. 2015. Reading between the lines: Overcoming data sparsity for accurate classification of lexical relationships. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics (*SEM 2015), pages 182–192, Denver, USA. Patrick Pantel, Deepak Ravichandran, and Eduard Hovy. 2004. Towards terascale semantic acquisition. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pages 771–777, Geneva, Switzerland. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532– 1543, Doha, Qatar. Laura Rimell. 2014. Distributional lexical entailment by topic coherence. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 511–519, Gothenburg, Sweden. Stephen Roller and Katrin Erk. 2016. Relations such as hypernymy: Identifying and exploiting Hearst patterns in distributional vectors for lexical entailment. arXiv preprint arXiv:1605.05433. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1025–1036, Dublin, Ireland. Andrew Rosenberg and Julia Hirschberg. 2007. VMeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning 2007 (EMNLP-CoNLL 2007), pages 410–420, Prague, Czech Republic. Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 38–42, Gothenburg, Sweden. Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. 
In Advances in Neural Information Processing Systems 25 (NIPS-13). Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. 2006a. Introduction to Data Mining. Addison Wesley. Yee Fan Tan, Min-Yen Kan, and Hang Cui. 2006b. Extending corpus-based identification of light verb constructions using a supervised learning framework. In Proceedings of the EACL 2006 Workshop on Multiword-expressions in a Multilingual Context, pages 49–56, Trento, Italy. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the ACL (ACL 2010), pages 384–394, Uppsala, Sweden. Peter D. Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379–416. 1681 Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85. Ulrike Von Luxburg. 2007. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416. Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 2249–2259, Dublin, Ireland. Gerhard Weikum and Martin Theobald. 2010. From information to knowledge: harvesting entities and relationships from web sources. In Proceedings of the Twenty Ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 65–76, Indianapolis, USA. Chang Xu, Yanlong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RCNET: A general framework for incorporating knowledge into word representations. In Proceedings of the 23rd ACM Conference on Information and Knowledge Management (CIKM 2014), pages 1219– 1228, Shanghai, China. Ichiro Yamada, Kentaro Torisawa, Jun’ichi Kazama, Kow Kuroda, Masaki Murata, Stijn De Saeger, Francis Bond, and Asuka Sumida. 2009. Hypernym discovery based on distributional similarity and hierarchical structures. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), pages 929–937, Singapore. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 545– 550, Baltimore, USA. A. Zhila, W.T. Yih, C. Meek, G. Zweig, and T. Mikolov. 2013. Combining heterogeneous models for measuring relational similarity. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013). 1682
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1683–1692, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Minimum Risk Training for Neural Machine Translation Shiqi Shen†, Yong Cheng#, Zhongjun He+, Wei He+, Hua Wu+, Maosong Sun†, Yang Liu†∗ †State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China #Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China +Baidu Inc., Beijing, China {vicapple22, chengyong3001}@gmail.com, {hezhongjun, hewei06, wu hua}@baidu.com, {sms, liuyang2011}@tsinghua.edu.cn Abstract We propose minimum risk training for end-to-end neural machine translation. Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks. 1 Introduction Recently, end-to-end neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) has attracted increasing attention from the community. Providing a new paradigm for machine translation, NMT aims at training a single, large neural network that directly transforms a sourcelanguage sentence to a target-language sentence without explicitly modeling latent structures (e.g., word alignment, phrase segmentation, phrase reordering, and SCFG derivation) that are vital in conventional statistical machine translation (SMT) (Brown et al., 1993; Koehn et al., 2003; Chiang, 2005). Current NMT models are based on the encoderdecoder framework (Cho et al., 2014; Sutskever et al., 2014), with an encoder to read and encode a source-language sentence into a vector, from which a decoder generates a target-language sentence. While early efforts encode the input into a ∗Corresponding author: Yang Liu. fixed-length vector, Bahdanau et al. (2015) advocate the attention mechanism to dynamically generate a context vector for a target word being generated. Although NMT models have achieved results on par with or better than conventional SMT, they still suffer from a major drawback: the models are optimized to maximize the likelihood of training data instead of evaluation metrics that actually quantify translation quality. Ranzato et al. (2015) indicate two drawbacks of maximum likelihood estimation (MLE) for NMT. First, the models are only exposed to the training distribution instead of model predictions. Second, the loss function is defined at the word level instead of the sentence level. In this work, we introduce minimum risk training (MRT) for neural machine translation. The new training objective is to minimize the expected loss (i.e., risk) on the training data. MRT has the following advantages over MLE: 1. Direct optimization with respect to evaluation metrics: MRT introduces evaluation metrics as loss functions and aims to minimize expected loss on the training data. 2. 
Applicable to arbitrary loss functions: our approach allows arbitrary sentence-level loss functions, which are not necessarily differentiable. 3. Transparent to architectures: MRT does not assume the specific architectures of NMT and can be applied to any end-to-end NMT systems. While MRT has been widely used in conventional SMT (Och, 2003; Smith and Eisner, 2006; He and Deng, 2012) and deep learning based MT (Gao et al., 2014), to the best of our knowledge, this work is the first effort to introduce MRT 1683 into end-to-end NMT. Experiments on a variety of language pairs (Chinese-English, English-French, and English-German) show that MRT leads to significant improvements over MLE on a state-ofthe-art NMT system (Bahdanau et al., 2015). 2 Background Given a source sentence x = x1, . . . , xm, . . . , xM and a target sentence y = y1, . . . , yn, . . . , yN, end-to-end NMT directly models the translation probability: P(y|x; θ) = N Y n=1 P(yn|x, y<n; θ), (1) where θ is a set of model parameters and y<n = y1, . . . , yn−1 is a partial translation. Predicting the n-th target word can be modeled by using a recurrent neural network: P(yn|x, y<n; θ) ∝exp n q(yn−1, zn, cn, θ) o , (2) where zn is the n-th hidden state on the target side, cn is the context for generating the n-th target word, and q(·) is a non-linear function. Current NMT approaches differ in calculating zn and cn and defining q(·). Please refer to (Sutskever et al., 2014; Bahdanau et al., 2015) for more details. Given a set of training examples D = {⟨x(s), y(s)⟩}S s=1, the standard training objective is to maximize the log-likelihood of the training data: ˆθMLE = argmax θ n L(θ) o , (3) where L(θ) = S X s=1 log P(y(s)|x(s); θ) (4) = S X s=1 N(s) X n=1 log P(y(s) n |x(s), y(s) <n; θ). (5) We use N(s) to denote the length of the s-th target sentence y(s). The partial derivative with respect to a model parameter θi is calculated as ∂L(θ) ∂θi = S X s=1 N(s) X n=1 ∂P(y(s) n |x(s), y(s) <n; θ)/∂θi P(y(s) n |x(s), y(s) <n; θ) . (6) Ranzato et al. (2015) point out that MLE for end-to-end NMT suffers from two drawbacks. First, while the models are trained only on the training data distribution, they are used to generate target words on previous model predictions, which can be erroneous, at test time. This is referred to as exposure bias (Ranzato et al., 2015). Second, MLE usually uses the cross-entropy loss focusing on word-level errors to maximize the probability of the next correct word, which might hardly correlate well with corpus-level and sentence-level evaluation metrics such as BLEU (Papineni et al., 2002) and TER (Snover et al., 2006). As a result, it is important to introduce new training algorithms for end-to-end NMT to include model predictions during training and optimize model parameters directly with respect to evaluation metrics. 3 Minimum Risk Training for Neural Machine Translation Minimum risk training (MRT), which aims to minimize the expected loss on the training data, has been widely used in conventional SMT (Och, 2003; Smith and Eisner, 2006; He and Deng, 2012) and deep learning based MT (Gao et al., 2014). The basic idea is to introduce evaluation metrics as loss functions and assume that the optimal set of model parameters should minimize the expected loss on the training data. Let ⟨x(s), y(s)⟩be the s-th sentence pair in the training data and y be a model prediction. We use a loss function ∆(y, y(s)) to measure the discrepancy between the model prediction y and the goldstandard translation y(s). 
Such a loss function can be negative smoothed sentence-level evaluation metrics such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Snover et al., 2006), or METEOR (Lavie and Denkowski, 2009) that have been widely used in machine translation evaluation. Note that a loss function is not parameterized and thus not differentiable. In MRT, the risk is defined as the expected loss with respect to the posterior distribution: R(θ) = S X s=1 Ey|x(s);θ h ∆(y, y(s)) i (7) = S X s=1 X y∈Y(x(s)) P(y|x(s); θ)∆(y, y(s)), (8) where Y(x(s)) is a set of all possible candidate translations for x(s). 1684 ∆(y, y(s)) P(y|x(s); θ) y1 −1.0 0.2 0.3 0.5 0.7 y2 −0.3 0.5 0.2 0.2 0.1 y3 −0.5 0.3 0.5 0.3 0.2 Ey|x(s);θ[∆(y, y(s))] −0.50 −0.61 −0.71 −0.83 Table 1: Example of minimum risk training. x(s) is an observed source sentence, y(s) is its corresponding gold-standard translation, and y1, y2, and y3 are model predictions. For simplicity, we suppose that the full search space contains only three candidates. The loss function ∆(y, y(s)) measures the difference between model prediction and gold-standard. The goal of MRT is to find a distribution (the last column) that correlates well with the gold-standard by minimizing the expected loss. The training objective of MRT is to minimize the risk on the training data: ˆθMRT = argmin θ n R(θ) o . (9) Intuitively, while MLE aims to maximize the likelihood of training data, our training objective is to discriminate between candidates. For example, in Table 1, suppose the candidate set Y(x(s)) contains only three candidates: y1, y2, and y3. According to the losses calculated by comparing with the gold-standard translation y(s), it is clear that y1 is the best candidate, y3 is the second best, and y2 is the worst: y1 > y3 > y2. The right half of Table 1 shows four models. As model 1 (column 3) ranks the candidates in a reverse order as compared with the gold-standard (i.e., y2 > y3 > y1), it obtains the highest risk of −0.50. Achieving a better correlation with the gold-standard than model 1 by predicting y3 > y1 > y2, model 2 (column 4) reduces the risk to −0.61. As model 3 (column 5) ranks the candidates in the same order with the gold-standard, the risk goes down to −0.71. The risk can be further reduced by concentrating the probability mass on y1 (column 6). As a result, by minimizing the risk on the training data, we expect to obtain a model that correlates well with the gold-standard. In MRT, the partial derivative with respect to a model parameter θi is given by ∂R(θ) ∂θi = S X s=1 Ey|x(s);θ " ∆(y, y(s)) × N(s) X n=1 ∂P(yn|x(s), y<n; θ)/∂θi P(yn|x(s), y<n; θ) # . (10) Since Eq. (10) suggests there is no need to differentiate ∆(y, y(s)), MRT allows arbitrary nondifferentiable loss functions. In addition, our approach is transparent to architectures and can be applied to arbitrary end-to-end NMT models. Despite these advantages, MRT faces a major challenge: the expectations in Eq. (10) are usually intractable to calculate due to the exponential search space of Y(x(s)), the non-decomposability of the loss function ∆(y, y(s)), and the context sensitiveness of NMT. 
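To make the computation in Table 1 concrete, the short Python sketch below (ours, not code from the paper) recomputes the expected loss of Eq. (8) for the three candidates and the four candidate distributions of the table; all numbers are taken directly from Table 1.

```python
# Expected risk of Eq. (8) for the toy example in Table 1.
# Each loss is Delta(y, y^(s)); more negative = better candidate.
losses = {"y1": -1.0, "y2": -0.3, "y3": -0.5}

# The four candidate distributions P(y|x^(s); theta), one per model column.
models = [
    {"y1": 0.2, "y2": 0.5, "y3": 0.3},
    {"y1": 0.3, "y2": 0.2, "y3": 0.5},
    {"y1": 0.5, "y2": 0.2, "y3": 0.3},
    {"y1": 0.7, "y2": 0.1, "y3": 0.2},
]

for i, p in enumerate(models, start=1):
    risk = sum(p[y] * losses[y] for y in losses)  # E_{y|x;theta}[Delta(y, y^(s))]
    print(f"model {i}: expected loss = {risk:.2f}")
# Output: -0.50, -0.61, -0.71, -0.83, matching the last row of Table 1:
# the more probability mass a model puts on the best candidate y1, the lower its risk.
```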
To alleviate this problem, we propose to only use a subset of the full search space to approximate the posterior distribution and introduce a new training objective: ˜R(θ) = S X s=1 Ey|x(s);θ,α h ∆(y, y(s)) i (11) = S X s=1 X y∈S(x(s)) Q(y|x(s); θ, α)∆(y, y(s)), (12) where S(x(s)) ⊂Y(x(s)) is a sampled subset of the full search space, and Q(y|x(s); θ, α) is a distribution defined on the subspace S(x(s)): Q(y|x(s); θ, α) = P(y|x(s); θ)α P y′∈S(x(s)) P(y′|x(s); θ)α . (13) Note that α is a hyper-parameter that controls the sharpness of the Q distribution (Och, 2003). Algorithm 1 shows how to build S(x(s)) by sampling the full search space. The sampled subset initializes with the gold-standard translation (line 1). Then, the algorithm keeps sampling a target word given the source sentence and the partial translation until reaching the end of sentence (lines 3-16). Note that sampling might produce duplicate candidates, which are removed when building 1685 Input: the s-th source sentence in the training data x(s), the s-th target sentence in the training data y(s), the set of model parameters θ, the limit on the length of a candidate translation l, and the limit on the size of sampled space k. Output: sampled space S(x(s)). 1 S(x(s)) ←{y(s)}; // the gold-standard translation is included 2 i ←1; 3 while i ≤k do 4 y ←∅; // an empty candidate translation 5 n ←1; 6 while n ≤l do 7 y ∼P(yn|x(s), y<n; θ); // sample the n-th target word 8 y ←y ∪{y}; 9 if y = EOS then 10 break; // terminate if reach the end of sentence 11 end 12 n ←n + 1; 13 end 14 S(x(s)) ←S(x(s)) ∪{y}; 15 i ←i + 1; 16 end Algorithm 1: Sampling the full search space. the subspace. We find that it is inefficient to force the algorithm to generate exactly k distinct candidates because high-probability candidates can be sampled repeatedly, especially when the probability mass highly concentrates on a few candidates. In practice, we take advantage of GPU’s parallel architectures to speed up the sampling. 1 Given the sampled space, the partial derivative with respect to a model parameter θi of ˜R(θ) is given by ∂˜R(θ) ∂θi = α S X s=1 Ey|x(s);θ,α " ∂P(y|x(s); θ)/∂θi P(y|x(s); θ) ×  ∆(y, y(s)) − Ey′|x(s);θ,α[∆(y′, y(s))] # . (14) Since |S(x(s))| ≪|Y(x(s))|, the expectations in Eq. (14) can be efficiently calculated by explicitly enumerating all candidates in S(x(s)). In our experiments, we find that approximating the full space with 100 samples (i.e., k = 100) works very well and further increasing sample size does not lead to significant improvements (see Section 4.3). 1To build the subset, an alternative to sampling is computing top-k translations. We prefer sampling to computing top-k translations for efficiency: sampling is more efficient and easy-to-implement than calculating k-best lists, especially given the extremely parallel architectures of GPUs. 4 Experiments 4.1 Setup We evaluated our approach on three translation tasks: Chinese-English, English-French, and English-German. The evaluation metric is BLEU (Papineni et al., 2002) as calculated by the multi-bleu.perl script. For Chinese-English, the training data consists of 2.56M pairs of sentences with 67.5M Chinese words and 74.8M English words, respectively. We used the NIST 2006 dataset as the validation set (hyper-parameter optimization and model selection) and the NIST 2002, 2003, 2004, 2005, and 2008 datasets as test sets. 
For English-French, to compare with the results reported by previous work on end-to-end NMT (Sutskever et al., 2014; Bahdanau et al., 2015; Jean et al., 2015; Luong et al., 2015b), we used the same subset of the WMT 2014 training corpus that contains 12M sentence pairs with 304M English words and 348M French words. The concatenation of news-test 2012 and news-test 2013 serves as the validation set and news-test 2014 as the test set. For English-German, to compare with the results reported by previous work (Jean et al., 2015; Luong et al., 2015a), we used the same subset of the WMT 2014 training corpus that contains 4M sentence pairs with 91M English words and 87M German words. The concatenation of news-test 2012 and news-test 2013 is used as the validation set and news-test 2014 as the test set.
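For concreteness, the following minimal sketch (our own illustration with toy numbers, not the paper's GPU implementation) shows how the sharpened distribution Q of Eq. (13) and the sampled risk of Eq. (12) can be computed once the candidates of the sampled subspace, their model log-probabilities, and their losses are available; the default value of alpha below is the one tuned in Section 4.2.

```python
import math

def sampled_risk(log_probs, losses, alpha=5e-3):
    """Sampled risk of Eq. (12): sum over the subspace of Q(y|x; theta, alpha) * Delta(y, y_gold).

    log_probs -- log P(y|x; theta) for each candidate in S(x), built as in
                 Algorithm 1 (gold translation plus k sampled, de-duplicated candidates)
    losses    -- Delta(y, y_gold) for each candidate, e.g. negative smoothed
                 sentence-level BLEU
    alpha     -- sharpness of Q (Eq. 13); 5e-3 is the value selected on the
                 Chinese-English validation set in Section 4.2
    """
    weights = [alpha * lp for lp in log_probs]        # log of P(y|x; theta)^alpha
    z = max(weights)                                  # shift for numerical stability
    norm = sum(math.exp(w - z) for w in weights)
    q = [math.exp(w - z) / norm for w in weights]     # Eq. (13)
    return sum(qi * li for qi, li in zip(q, losses))  # Eq. (12)

# Toy call with made-up scores for three sampled candidates:
print(sampled_risk(log_probs=[-2.0, -5.0, -9.0], losses=[-0.9, -0.4, -0.1]))
```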
However, we find that a sample size larger than 100 (e.g., k = 200) usually does not lead to significant improvements and increases the GPU memory requirement. Therefore, we set k = 100 in the following experiments. 4.4 Effect of Loss Function As our approach is capable of incorporating evaluation metrics as loss functions, we investigate the effect of different loss functions on BLEU, TER 1687 0 5 10 15 20 25 30 35 40 0 50 100 150 200 250 300 BLEU score Training time (hours) MRT MLE Figure 3: Comparison of training time on the Chinese-English validation set. and NIST scores on the Chinese-English validation set. As shown in Table 2, negative smoothed sentence-level BLEU (i.e, −sBLEU) leads to statistically significant improvements over MLE (p < 0.01). Note that the loss functions are all defined at the sentence level while evaluation metrics are calculated at the corpus level. This discrepancy might explain why optimizing with respect to sTER does not result in the lowest TER on the validation set. As −sBLEU consistently improves all evaluation metrics, we use it as the default loss function in our experiments. 4.5 Comparison of Training Time We used a cluster with 20 Telsa K40 GPUs to train the NMT model. For MLE, it takes the cluster about one hour to train 20,000 mini-batches, each of which contains 80 sentences. The training time for MRT is longer than MLE: 13,000 mini-batches can be processed in one hour on the same cluster. Figure 3 shows the learning curves of MLE and MRT on the validation set. For MLE, the BLEU score reaches its peak after about 20 hours and then keeps going down dramatically. Initializing with the best MLE model, MRT increases BLEU scores dramatically within about 30 hours. 3 Afterwards, the BLEU score keeps improving gradually but there are slight oscillations. 4.6 Results on Chinese-English Translation 4.6.1 Comparison of BLEU Scores Table 3 shows BLEU scores on Chinese-English datasets. For RNNSEARCH, we follow Luong 3Although it is possible to initialize with a randomized model, it takes much longer time to converge. 0 10 20 30 40 50 0 10 20 30 40 50 60 70 BLEU score Input sentence length MRT MLE Moses Figure 4: BLEU scores on the Chinese-English test set over various input sentence lengths. 0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60 70 Output sentence length Input sentence length Moses MRT MLE Figure 5: Comparison of output sentences lengths on the Chinese-English test set. et al. (2015b) to handle rare words. We find that introducing minimum risk training into neural MT leads to surprisingly substantial improvements over MOSES and RNNSEARCH with MLE as the training criterion (up to +8.61 and +7.20 BLEU points, respectively) across all test sets. All the improvements are statistically significant. 4.6.2 Comparison of TER Scores Table 4 gives TER scores on Chinese-English datasets. The loss function used in MRT is −sBLEU. MRT still obtains dramatic improvements over MOSES and RNNSEARCH with MLE as the training criterion (up to -10.27 and -8.32 TER points, respectively) across all test sets. All the improvements are statistically significant. 4.6.3 BLEU Scores over Sentence Lengths Figure 4 shows the BLEU scores of translations generated by MOSES, RNNSEARCH with MLE, 1688 System Training MT06 MT02 MT03 MT04 MT05 MT08 MOSES MERT 32.74 32.49 32.40 33.38 30.20 25.28 RNNSEARCH MLE 30.70 35.13 33.73 34.58 31.76 23.57 MRT 37.34 40.36 40.93 41.37 38.81 29.23 Table 3: Case-insensitive BLEU scores on Chinese-English translation. 
System Training MT06 MT02 MT03 MT04 MT05 MT08 MOSES MERT 59.22 62.97 62.44 61.20 63.44 62.36 RNNSEARCH MLE 60.74 58.94 60.10 58.91 61.74 64.52 MRT 52.86 52.87 52.17 51.49 53.42 57.21 Table 4: Case-insensitive TER scores on Chinese-English translation. MLE vs. MRT < = > evaluator 1 54% 24% 22% evaluator 2 53% 22% 25% Table 5: Subjective evaluation of MLE and MRT on Chinese-English translation. and RNNSEARCH with MRT on the ChineseEnglish test set with respect to input sentence lengths. While MRT consistently improves over MLE for all lengths, it achieves worse translation performance for sentences longer than 60 words. One reason is that RNNSEARCH tends to produce short translations for long sentences. As shown in Figure 5, both MLE and MRE generate much shorter translations than MOSES. This results from the length limit imposed by RNNSEARCH for efficiency reasons: a sentence in the training set is no longer than 50 words. This limit deteriorates translation performance because the sentences in the test set are usually longer than 50 words. 4.6.4 Subjective Evaluation We also conducted a subjective evaluation to validate the benefit of replacing MLE with MRT. Two human evaluators were asked to compare MLE and MRT translations of 100 source sentences randomly sampled from the test sets without knowing from which system a candidate translation was generated. Table 5 shows the results of subjective evaluation. The two human evaluators made close judgements: around 54% of MLE translations are worse than MRE, 23% are equal, and 23% are better. 4.6.5 Example Translations Table 6 shows some example translations. We find that MOSES translates a Chinese string “yi wei fuze yu pingrang dangju da jiaodao de qian guowuyuan guanyuan” that requires long-distance reordering in a wrong way, which is a notorious challenge for statistical machine translation. In contrast, RNNSEARCH-MLE seems to overcome this problem in this example thanks to the capability of gated RNNs to capture long-distance dependencies. However, as MLE uses a loss function defined only at the word level, its translation lacks sentence-level consistency: “chinese” occurs twice while “two senate” is missing. By optimizing model parameters directly with respect to sentence-level BLEU, RNNSEARCH-MRT seems to be able to generate translations more consistently at the sentence level. 4.7 Results on English-French Translation Table 7 shows the results on English-French translation. We list existing end-to-end NMT systems that are comparable to our system. All these systems use the same subset of the WMT 2014 training corpus and adopt MLE as the training criterion. They differ in network architectures and vocabulary sizes. Our RNNSEARCH-MLE system achieves a BLEU score comparable to that of Jean et al. (2015). RNNSEARCH-MRT achieves the highest BLEU score in this setting even with a vocabulary size smaller than Luong et al. (2015b) and Sutskever et al. (2014). Note that our approach does not assume specific architectures and can in principle be applied to any NMT systems. 4.8 Results on English-German Translation Table 8 shows the results on English-German translation. Our approach still significantly out1689 Source meiguo daibiao tuan baokuo laizi shidanfu daxue de yi wei zhongguo zhuanjia , liang ming canyuan waijiao zhengce zhuli yiji yi wei fuze yu pingrang dangju da jiaodao de qian guowuyuan guanyuan . 
Reference the us delegation consists of a chinese expert from the stanford university , two senate foreign affairs policy assistants and a former state department official who was in charge of dealing with pyongyang authority . MOSES the united states to members of the delegation include representatives from the stanford university , a chinese expert , two assistant senate foreign policy and a responsible for dealing with pyongyang before the officials of the state council . RNNSEARCH-MLE the us delegation comprises a chinese expert from stanford university , a chinese foreign office assistant policy assistant and a former official who is responsible for dealing with the pyongyang authorities . RNNSEARCH-MRT the us delegation included a chinese expert from the stanford university , two senate foreign policy assistants , and a former state department official who had dealings with the pyongyang authorities . Table 6: Example Chinese-English translations. “Source” is a romanized Chinese sentence, “Reference” is a gold-standard translation. “MOSES” and “RNNSEARCH-MLE” are baseline SMT and NMT systems. “RNNSEARCH-MRT” is our system. System Architecture Training Vocab BLEU Existing end-to-end NMT systems Bahdanau et al. (2015) gated RNN with search MLE 30K 28.45 Jean et al. (2015) gated RNN with search 30K 29.97 Jean et al. (2015) gated RNN with search + PosUnk 30K 33.08 Luong et al. (2015b) LSTM with 4 layers 40K 29.50 Luong et al. (2015b) LSTM with 4 layers + PosUnk 40K 31.80 Luong et al. (2015b) LSTM with 6 layers 40K 30.40 Luong et al. (2015b) LSTM with 6 layers + PosUnk 40K 32.70 Sutskever et al. (2014) LSTM with 4 layers 80K 30.59 Our end-to-end NMT systems this work gated RNN with search MLE 30K 29.88 gated RNN with search MRT 30K 31.30 gated RNN with search + PosUnk MRT 30K 34.23 Table 7: Comparison with previous work on English-French translation. The BLEU scores are casesensitive. “PosUnk” denotes Luong et al. (2015b)’s technique of handling rare words. System Architecture Training BLEU Existing end-to-end NMT systems Jean et al. (2015) gated RNN with search MLE 16.46 Jean et al. (2015) gated RNN with search + PosUnk 18.97 Jean et al. (2015) gated RNN with search + LV + PosUnk 19.40 Luong et al. (2015a) LSTM with 4 layers + dropout + local att. + PosUnk 20.90 Our end-to-end NMT systems this work gated RNN with search MLE 16.45 gated RNN with search MRT 18.02 gated RNN with search + PosUnk MRT 20.45 Table 8: Comparison with previous work on English-German translation. The BLEU scores are casesensitive. 1690 performs MLE and achieves comparable results with state-of-the-art systems even though Luong et al. (2015a) used a much deeper neural network. We believe that our work can be applied to their architecture easily. Despite these significant improvements, the margins on English-German and English-French datasets are much smaller than Chinese-English. We conjecture that there are two possible reasons. First, the Chinese-English datasets contain four reference translations for each sentence while both English-French and English-German datasets only have single references. Second, Chinese and English are more distantly related than English, French and German and thus benefit more from MRT that incorporates evaluation metrics into optimization to capture structural divergence. 5 Related Work Our work originated from the minimum risk training algorithms in conventional statistical machine translation (Och, 2003; Smith and Eisner, 2006; He and Deng, 2012). 
Och (2003) describes a smoothed error count to allow calculating gradients, which directly inspires us to use a parameter α to adjust the smoothness of the objective function. As neural networks are non-linear, our approach has to minimize the expected loss on the sentence level rather than the loss of 1-best translations on the corpus level. Smith and Eisner (2006) introduce minimum risk annealing for training log-linear models that is capable of gradually annealing to focus on the 1-best hypothesis. He et al. (2012) apply minimum risk training to learning phrase translation probabilities. Gao et al. (2014) leverage MRT for learning continuous phrase representations for statistical machine translation. The difference is that they use MRT to optimize a sub-model of SMT while we are interested in directly optimizing end-to-end neural translation models. The Mixed Incremental Cross-Entropy Reinforce (MIXER) algorithm (Ranzato et al., 2015) is in spirit closest to our work. Building on the REINFORCE algorithm proposed by Williams (1992), MIXER allows incremental learning and the use of hybrid loss function that combines both REINFORCE and cross-entropy. The major difference is that Ranzato et al. (2015) leverage reinforcement learning while our work resorts to minimum risk training. In addition, MIXER only samples one candidate to calculate reinforcement reward while MRT generates multiple samples to calculate the expected risk. Figure 2 indicates that multiple samples potentially increases MRT’s capability of discriminating between diverse candidates and thus benefit translation quality. Our experiments confirm their finding that taking evaluation metrics into account when optimizing model parameters does help to improve sentence-level text generation. More recently, our approach has been successfully applied to summarization (Ayana et al., 2016). They optimize neural networks for headline generation with respect to ROUGE (Lin, 2004) and also achieve significant improvements, confirming the effectiveness and applicability of our approach. 6 Conclusion In this paper, we have presented a framework for minimum risk training in end-to-end neural machine translation. The basic idea is to minimize the expected loss in terms of evaluation metrics on the training data. We sample the full search space to approximate the posterior distribution to improve efficiency. Experiments show that MRT leads to significant improvements over maximum likelihood estimation for neural machine translation, especially for distantly-related languages such as Chinese and English. In the future, we plan to test our approach on more language pairs and more end-to-end neural MT systems. It is also interesting to extend minimum risk training to minimum risk annealing following Smith and Eisner (2006). As our approach is transparent to loss functions and architectures, we believe that it will also benefit more end-to-end neural architectures for other NLP tasks. Acknowledgments This work was done while Shiqi Shen and Yong Cheng were visiting Baidu. Maosong Sun and Hua Wu are supported by the 973 Program (2014CB340501 & 2014CB34505). Yang Liu is supported by the National Natural Science Foundation of China (No.61522204 and No.61432013) and the 863 Program (2015AA011808). This research is also supported by the Singapore National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme. 1691 References Ayana, Shiqi Shen, Zhiyuan Liu, and Maosong Sun. 2016. 
Neural headline generation with minimum risk training. arXiv:1604.01904. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguisitics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of HLT. Jianfeng Gao, Xiaodong He, Wen tao Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proceedings of ACL. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv:1503.03535. Xiaodong He and Li Deng. 2012. Maximum expected bleu training of phrase and lexicon translation models. In Proceedings of ACL. Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of EMNLP. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL. Alon Lavie and Michael Denkowski. 2009. The mereor metric for automatic evaluation of machine translation. Machine Translation. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of ACL. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv:1511.06732v1. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv:1511.06709. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceedings of ACL. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA. Andreas Stolcke. 2002. Srilm - am extensible language modeling toolkit. In Proceedings of ICSLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Ronald J. 
Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 161–171, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Transition-Based System for Joint Lexical and Syntactic Analysis Matthieu Constant Universit´e Paris-Est, LIGM (UMR 8049) Alpage, INRIA, Universit´e Paris Diderot Paris, France [email protected] Joakim Nivre Uppsala University Dept. of Linguistics and Philology Uppsala, Sweden [email protected] Abstract We present a transition-based system that jointly predicts the syntactic structure and lexical units of a sentence by building two structures over the input words: a syntactic dependency tree and a forest of lexical units including multiword expressions (MWEs). This combined representation allows us to capture both the syntactic and semantic structure of MWEs, which in turn enables deeper downstream semantic analysis, especially for semicompositional MWEs. The proposed system extends the arc-standard transition system for dependency parsing with transitions for building complex lexical units. Experiments on two different data sets show that the approach significantly improves MWE identification accuracy (and sometimes syntactic accuracy) compared to existing joint approaches. 1 Introduction Multiword expressions (MWEs) are sequences of words that form non-compositional semantic units. Their identification is crucial for semantic analysis, which is traditionally based on the principle of compositionality. For instance, the meaning of cut the mustard cannot be compositionally derived from the meaning of its elements and the expression therefore has to be treated as a single unit. Since Sag et al. (2002), MWEs have attracted growing attention in the NLP community. Identifying MWEs in running text is challenging for several reasons (Baldwin and Kim, 2010; Seretan, 2011; Ramisch, 2015). First, MWEs encompass very diverse linguistic phenomena, such as complex grammatical words (in spite of, because of), nominal compounds (light house), noncanonical prepositional phrases (above board), verbal idiomatic expressions (burn the midnight oil), light verb constructions (have a bath), multiword names (New York), and so on. They can also be discontiguous in the sense that the sequence can include intervening elements (John pulled Mary’s leg). They may also vary in their morphological forms (hot dog, hot dogs), in their lexical elements (lose one’s mind/head), and in their syntactic structure (he took a step, the step he took). The semantic processing of MWEs is further complicated by the fact that there exists a continuum between entirely non-compositional expressions (piece of cake) and almost free expressions (traffic light). Many MWEs are indeed semicompositional. For example, the compound white wine denotes a type of wine, but the color of the wine is not white, so the expression is only partially transparent. In the light verb construction take a nap, nap keeps its usual meaning but the meaning of the verb take is bleached. In addition, the noun can be compositionally modified as in take a long nap. Such cases show that MWEs may be decomposable and partially analyzable, which implies the need for predicting their internal structure in order to compute their meaning. From a syntactic point of view, MWEs often have a regular structure and do not need special syntactic annotation. Some MWEs have an irregular structure, such as by and large which on the surface is a coordination of a preposition and an adjective. 
They are syntactically as well as semantically non-compositional and cannot be represented with standard syntactic structures, as stated in Candito and Constant (2014). Many of these irregular MWEs are complex grammatical words like because of, in spite of and in order to – fixed (grammatical) MWEs in the sense of Sag et al. (2002). In some treebanks, these are annotated using special structures and labels because they can161 not be modified or decomposed. We hereafter use the term fixed MWE to refer to either fixed or irregular MWEs. In this paper, we present a novel representation that allows both regular and irregular MWEs to be adequately represented without compromising the syntactic representation. We then show how this representation can be processed using a transitionbased system that is a mild extension of a standard dependency parser. This system takes as input a sentence consisting of a sequence of tokens and predicts its syntactic dependency structure as well as its lexical units (including MWEs). The resulting structure combines two factorized substructures: (i) a standard tree representing the syntactic dependencies between the lexical elements of the sentence and (ii) a forest of lexical trees including MWEs identified in the sentence. Each MWE is represented by a constituency-like tree, which permits complex lexical units like MWE embeddings (for example, [[Los Angeles ] Lakers], I will [take a [rain check]]). The syntactic and lexical structures are factorized in the sense that they share lexical elements: both tokens and fixed MWEs. The proposed parsing model is an extension of a classical arc-standard parser, integrating specific transitions for MWE detection. In order to deal with the two linguistic dimensions separately, it uses two stacks (instead of one). It is synchronized by using a single buffer, in order to handle the factorization of the two structures. It also includes different hard constraints on the system in order to reduce ambiguities artificially created by the addition of new transitions. To the best of our knowledge, this system is the first transition-based parser that includes a specific mechanism for handling MWEs in two dimensions. Previous related research has usually proposed either pipeline approaches with MWE identification performed either before or after dependency parsing (Kong et al., 2014; Vincze et al., 2013a) or workaround joint solutions using off-the-shelf parsers trained on dependency treebanks where MWEs are annotated by specific subtrees (Nivre and Nilsson, 2004; Eryi˘git et al., 2011; Vincze et al., 2013b; Candito and Constant, 2014; Nasr et al., 2015). 2 Syntactic and Lexical Representations A standard dependency tree represents syntactic structure by establishing binary syntactic relations between words. This is an adequate representation of both syntactic and lexical structure on the assumption that words and lexical units are in a one-to-one correspondence. However, as argued in the introduction, this assumption is broken by the existence of MWEs, and we therefore need to distinguish lexical units as distinct from words. In the new representation, each lexical unit – whether a single word or an MWE – is associated with a lexical node, which has linguistic attributes such as surface form, lemma, part-ofspeech tag and morphological features. 
With an obvious reuse of terminology from context-free grammar, lexical nodes corresponding to MWEs are said to be non-terminal, because they have other lexical nodes as children, while lexical nodes corresponding to single words are terminal (and do not have any children). Some lexical nodes are also syntactic nodes, that is, nodes of the syntactic dependency tree. These nodes are either non-terminal nodes corresponding to (complete) fixed MWEs or terminal nodes corresponding to words that do not belong to a fixed MWE. Syntactic nodes are connected into a tree structure by binary, asymmetric dependency relations pointing from a head node to a dependent node. Figure 1 shows the representation of the sentence the prime minister made a few good decisions. It contains three non-terminal lexical nodes: one fixed MWE (a few), one contiguous non-fixed MWE (prime minister) and one discontiguous non-fixed MWE (made decisions). Of these, only the first is also a syntactic node. Note that, for reasons of clarity, we have suppressed the lexical children of the fixed MWE in Figure 1. (The non-terminal node corresponding to a few has the lexical children a and few.) For the same reason, we are not showing the linguistic attributes of lexical nodes. For example, the node made-decisions has the following set of features: surface-form=‘made decisions’, lemma=‘make decision’, POS=‘V’. Nonfixed MWEs have regular syntax and their components might have some autonomy. For example, in the light verb construction made-decisions, the noun decisions is modified by the adjective good that is not an element of the MWE. The proposed representation of fixed MWEs is an alternative to using special dependency labels as has often been the case in the past (Nivre and Nilsson, 2004; Eryi˘git et al., 2011). In addition 162 the prime minister made a few good decisions det mod subj mod obj mod made-decisions prime-minister Figure 1: Representation of syntactic and lexical structure. she took a rain check subj obj det mod took-rain-check rain-check Figure 2: Lexical structure of embedded MWEs. to special labels, MWEs are then represented as a flat subtree of the syntactic tree. The root of the subtree is the left-most or right-most element of the MWE, and all the other elements are attached to this root with dependencies having special labels. Despite the special labels, these subtrees look like ordinary dependency structures and may confuse a syntactic parser. In our representation, fixed MWEs are instead represented by nodes that are atomic with respect to syntactic structure (but complex with respect to lexical structure), which makes it easier to store linguistic attributes that belong to the fixed MWE and cannot be derived from its components. The new representation also allows us to represent the hierarchical structure of embedded MWEs. Figure 2 provides an analysis of she took a rain check that includes such an embedding. The lexical node took-rain-check corresponds to a light verb construction where the object is a compound noun that keeps its semantic interpretation whereas the verb has a neutral value. One of its children is the lexical node rain-check corresponding to a compound noun. Let us now define the representation formally. Given a sentence x = x1, . . . , xn consisting of n tokens, the syntactic and lexical representation is a quadruple (V, F, N, A), where 1. V is the set of terminal nodes, corresponding one-to-one to the tokens x1, . . . , xn, 2. 
F is a set of n-ary trees on V , with each tree corresponding to a fixed MWE and the root labeled with the part-of-speech tag for the MWE, 3. N is a set of n-ary trees on F, with each tree corresponding to a non-fixed MWE and the root labeled with the part-of-speech tag for the MWE, 4. A is a set of labeled dependency arcs defining a tree over F. This is a generalization of the standard definition of a dependency tree (see, for example, K¨ubler et al. (2009)), where the dependency structure is defined over an intermediate layer of lexical nodes (F) instead of directly on the terminal nodes (V ), with an additional layer of non-fixed MWEs added on top. To exemplify the definition, here are the formal structures corresponding to the representation visualized in Figure 1. V = {1, 2, 3, 4, 5, 6, 7, 8} F = {1, 2, 3, 4, A(5, 6), 7, 8} N = {1, N(2, 3), V(4, 8), A(5, 6), 7} A = {(3, det, 1), (3, mod, 2), (4, subj, 3), (4, obj, 8), (8, mod, A(5, 6)), (8, mod, 7)} Terminal nodes are represented by integers corresponding to token positions, while trees are represented by n-ary terms t(c1, . . . , cn), where t is a part-of-speech tag and c1, . . . , cn are the subtrees immediately dominated by the root of the tree. The total set of lexical nodes is L = V ∪F ∪N, where V contains the terminal and (F ∪N) −V the non-terminal lexical nodes. The set of syntactic nodes is simply F. It is worth noting that the representation imposes some limitations on what MWEs can be represented. In particular, we can only represent overlapping MWEs if they are cases of embedding, that is, cases where one MWE is properly contained in the other. For example, in an example like she took a walk then a bath, it might be argued that took should be part of two lexical units: took-walk and took-bath. This cannot currently be represented. By contrast, we can accommodate cases where two lexical units are interleaved, as in the French example il prend un cachet et demi, with the two units prend-cachet and un-et-demi, which occur in the crossed pattern A1 B1 A2 B2. However, while these cases can be represented in 163 principle, the parsing model we propose will not be capable of processing them. Finally, it is worth noting that, although our representation in general allows lexical nodes with arbitrary branching factor for flat MWEs, it is often convenient for parsing to assume that all trees are binary (Crabb´e, 2014). For the rest of the paper, we therefore assume that non-binary trees are always transformed into equivalent binary trees using either right or left binarization. Such transformations add intermediate temporary nodes that are only used for internal processing. 3 Transition-Based Model A transition-based parser is based on three components: a transition system for mapping sentences to their representation, a model for scoring different transition sequences (derivations), and a search algorithm for finding the highest scoring transition sequence for a given input sentence. Following Nivre (2008), we define a transition system as a quadruple S = (C, T, cs, Ct) where: 1. C is a set of configurations, 2. T is a set of transitions, each of which is a partial function t : C →C, 3. cs is an initialization function that maps each input sentence x to an initial configuration cs(x) ∈C, 4. Ct ⊆C is a set of terminal configurations. A transition sequence for a sentence x is a sequence of configurations C0,m = c0, . . . 
, cm such that c0 = cs(x), cm ∈Ct, and for every ci (0 ≤i < m) there is some transition t ∈T such that t(ci) = ci+1. Every transition sequence defines a representation for the input sentence. Training a transition-based parser means training the model for scoring transition sequences. This requires an oracle that determines what is an optimal transition sequence given an input sentence and the correct output representation (as given by treebank). Static oracles define a single unique transition sequence for each input-output pair. Dynamic oracles allow more than one optimal transition sequence and can also score nonoptimal sequences (Goldberg and Nivre, 2013). Once a scoring model has been trained, parsing is usually performed as best-first search under this model, using greedy search or beam search. 3.1 Arc-Standard Dependency Parsing Our starting point is the arc-standard transition system for dependency parsing first defined in Nivre (2004) and represented schematically in Figure 3. A configuration in this system consists of a triple c = (σ, β, A), where σ is a stack containing partially processed nodes, β is a buffer containing remaining input nodes, and A is a set of dependency arcs. The initialization function maps x = x1, . . . , xn to cs(x) = ([ ], [1, . . . , n], { }), and the set Ct of terminal configurations contains any configuration of the form c = ([i], [ ], A). The dependency tree defined by such a terminal configuration is ({1, . . . , n}, A). There are three possible transitions: • Shift takes the first node in the buffer and pushes it onto the stack. • Right-Arc(k) adds a dependency arc (i, k, j) to A, where j is the first and i the second element of the stack, and removes j from the stack. • Left-Arc(k) adds a dependency arc (j, k, i) to A, where j is the first and i the second element of the stack, and removes i from the stack. A transition sequence in the arc-standard system builds a projective dependency tree over the set of terminal nodes in V . The tree is built bottom-up by attaching dependents to their head and removing them from the stack until only the root of the tree remains on the stack. 3.2 Joint Syntactic and Lexical Analysis To perform joint syntactic and lexical analysis we need to be able to build structure in two parallel dimensions: the syntactic dimension, represented by a dependency tree, and the lexical dimension, represented by a forest of (binary) trees. The two dimensions share the token-level representation, as well as the level of fixed MWEs, but the syntactic tree and the non-fixed MWEs are independent. We extend the parser configuration to use two stacks, one for each dimension, but only one buffer. In addition, we need not only a set of dependency arcs, but also a set of lexical units. A configuration in the new system therefore consists of a quintuple c = (σ1, σ2, β, A, L), where σ1 and σ2 are stacks containing partially processed nodes (which may now be complex MWEs), β is a buffer containing remaining input nodes (which 164 Initial: ([ ], [0, . . . , n], { }) Terminal: ([i], [ ], A) Shift: (σ, i|β, A) ⇒ (σ|i, β, A) Right-Arc(k): (σ|i|j, β, A) ⇒ (σ|i, β, A ∪{(i, k, j)}) Left-Arc(k): (σ|i|j, β, A) ⇒ (σ|j, β, A ∪{(j, k, i)}) Figure 3: Arc-standard transition system. Initial: ([ ], [ ], [0, . . . 
, n], { }, { }) Terminal: ([x], [ ], [ ], A, L) Shift: (σ1, σ2, i|β, A, L) ⇒ (σ1|i, σ2|i, β, A, L) Right-Arc(k): (σ1|x|y, σ2, β, A, L) ⇒ (σ1|x, σ2, β, A ∪{(x, k, y)}, L) Left-Arc(k): (σ1|x|y, σ2, β, A, L) ⇒ (σ1|y, σ2, β, A ∪{(y, k, x)}, L) MergeF (t): (σ1|x|y, σ2|x|y, β, A, L) ⇒ (σ1|t(x, y), σ2|t(x, y), β, A, L) MergeN(t): (σ1, σ2|x|y, β, A, L) ⇒ (σ1, σ2|t(x, y), β, A, L) Complete: (σ1, σ2|x, β, A, L) ⇒ (σ1, σ2, β, A, L ∪{x}) Figure 4: Transition system for joint syntactic and lexical analysis. are always tokens), A is a set of dependency arcs, and L is a set of lexical units (tokens or MWEs). The initialization function maps x = x1, . . . , xn to cs(x) = ([ ], [ ], [1, . . . , n], { }, { }), and the set Ct of terminal configurations contains any configuration of the form c = ([x], [ ], [ ], A, L). The dependency tree defined by such a terminal configuration is (F, A), and the set of lexical units is V ∪L. Note that the set F of syntactic nodes is not explicitly represented in the configuration but is implicitly defined by A. Similarly, the set L only contains F ∪N. The new transition system is shown in Figure 4. There are now six possible transitions: • Shift takes the first node in the buffer and pushes it onto both stacks. This guarantees that the two dimensions are synchronized at the token level. • Right-Arc(k) adds a dependency arc (x, k, y) to A, where y is the first and x the second element of the syntactic stack (σ1), and removes y from this stack. It does not affect the lexical stack (σ2).1 • Left-Arc(k) adds a dependency arc (y, k, x) to A, where y is the first and x the second element of the syntactic stack (σ1), and removes x from this stack. Like Right-Arc(k), it does 1We use the variables x and y, instead of i and j, because the stack elements can now be complex lexical units as well as simple tokens. not affect the lexical stack (σ2). • MergeF (t) applies in a configuration where the two top elements x and y are identical on both stacks and combines these elements into a tree t(x, y) representing a fixed MWE with part-of-speech tag t. Since it operates on both stacks, the new element will be a syntactic node as well as a lexical node. • MergeN(t) combines the two top elements x and y on the lexical stack (σ2) into a tree t(x, y) representing a non-fixed MWE with part-of-speech tag t. Since it only operates on the lexical stack, the new element will not be a syntactic node. • Complete moves the top element x on the lexical stack (σ2) to L, making it a final lexical unit in the output representation. Note that x can be a simple token, a fixed MWE (created on both stacks), or a non-fixed MWE (created only on the lexical stack). A transition sequence in the new system derives the set of lexical nodes and simultaneously builds a projective dependency tree over the set of syntactic nodes. By way of example, Figure 5 shows the transition sequence for the example in Figure 1. 
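As a concrete illustration of the system in Figure 4, here is a minimal Python sketch (our own rendering, not the authors' implementation) of a configuration and the six transitions; MWE nodes are represented as (tag, left, right) tuples and validity checks are omitted.

```python
class Config:
    """Configuration c = (sigma1, sigma2, beta, A, L) of Figure 4."""
    def __init__(self, tokens):
        self.s1 = []                  # syntactic stack (sigma1)
        self.s2 = []                  # lexical stack (sigma2)
        self.buf = list(tokens)       # buffer of remaining token positions
        self.arcs = set()             # dependency arcs A as (head, label, dependent)
        self.lex = []                 # completed lexical units L

def shift(c):
    i = c.buf.pop(0)                  # push the next token onto both stacks
    c.s1.append(i); c.s2.append(i)

def right_arc(c, k):
    y = c.s1.pop(); x = c.s1[-1]      # y is attached as a k-dependent of x
    c.arcs.add((x, k, y))

def left_arc(c, k):
    y = c.s1.pop(); x = c.s1.pop()    # x is attached as a k-dependent of y
    c.arcs.add((y, k, x)); c.s1.append(y)

def merge_f(c, t):                    # fixed MWE: operates on both stacks
    y = c.s1.pop(); x = c.s1.pop(); c.s2.pop(); c.s2.pop()
    node = (t, x, y)
    c.s1.append(node); c.s2.append(node)

def merge_n(c, t):                    # non-fixed MWE: lexical stack only
    y = c.s2.pop(); x = c.s2.pop()
    c.s2.append((t, x, y))

def complete(c):
    c.lex.append(c.s2.pop())          # finalize the top lexical unit

# First steps of the derivation in Figure 5 for "the prime minister made ...":
c = Config(range(1, 9))
shift(c); complete(c)                 # token 1 ("the") is a lexical unit on its own
shift(c); shift(c); merge_n(c, "N")   # "prime minister" becomes a non-fixed MWE
complete(c)
left_arc(c, "mod"); left_arc(c, "det")
print(c.s1, c.arcs, c.lex)            # [3], {(3,'mod',2), (3,'det',1)}, [1, ('N', 2, 3)]
```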
3.3 Implicit Completion The system presented above has one potential drawback: it needs a separate Complete transition for every lexical unit, even in the default case 165 Transition Configuration ([ ], [ ], [1, 2, 3, 4, 5, 6, 7, 8], A0 = { }, L0 = { }) Shift ⇒ ([1], [1], [2, 3, 4, 5, 6, 7, 8], A0, L0) Complete ⇒ ([1], [ ], [2, 3, 4, 5, 6, 7, 8], A0, L1 = L0 ∪{1}) Shift ⇒ ([1, 2], [2], [3, 4, 5, 6, 7, 8], A0, L1) Shift ⇒ ([1, 2, 3], [2, 3], [4, 5, 6, 7, 8], A0, L1) MergeN(N) ⇒ ([1, 2, 3], [N(2, 3)], [4, 5, 6, 7, 8], A1, L1) Complete ⇒ ([1, 2, 3], [ ], [4, 5, 6, 7, 8], A0, L2 = L1 ∪{N(2, 3)}) Left-Arc(mod) ⇒ ([1, 3], [ ], [4, 5, 6, 7, 8], A1 = A0 ∪{(3, mod, 2)}, L2) Left-Arc(det) ⇒ ([3], [ ], [4, 5, 6, 7, 8], A2 = A1 ∪{(3, det, 1)}, L2) Shift ⇒ ([3, 4], [4], [5, 6, 7, 8], A2, L2) Left-Arc(subj) ⇒ ([4], [4], [5, 6, 7, 8], A3 = A2 ∪{(4, subj, 3)}, L2) Shift ⇒ ([4, 5], [4, 5], [6, 7, 8], A3, L2) Shift ⇒ ([4, 5, 6], [4, 5, 6], [7, 8], A3, L2) MergeF (A) ⇒ ([4, A(5, 6)], [4, A(5, 6)], [7, 8], A3, L2) Complete ⇒ ([4, A(5, 6)], [4], [7, 8], A3, L3 = L2 ∪{A(5, 6)}) Shift ⇒ ([4, A(5, 6), 7], [4, 7], [8], A3, L3) Complete ⇒ ([4, A(5, 6), 7], [4], [8], A3, L4 = L3 ∪{7}) Shift ⇒ ([4, A(5, 6), 7, 8], [4, 8], [ ], A3, L4) Left-Arc(mod) ⇒ ([4, A(5, 6), 8], [4, 8], [ ], A4 = A3 ∪{(8, mod, 7)}, L4) Left-Arc(mod) ⇒ ([4, 8], [4, 8], [ ], A5 = A4 ∪{(8, mod, A(5, 6))}, L4) MergeN(V) ⇒ ([4, 8], [V(4, 8)], [ ], A5, L4) Complete ⇒ ([4, 8], [ ], [ ], A5, L5 = L4 ∪{V(4, 8)}) Right-Arc(obj) ⇒ ([4], [ ], [ ], A6 = A5 ∪{(4, obj, 8)}, L5) Figure 5: Transition sequence for joint syntactic and lexical analysis. when a lexical unit is just a token. This makes sequences much longer and increases the inherent ambiguity. One way to deal with this problem is to make the Complete transition implicit and deterministic, so that it is not scored by the model (or predicted by a classifier in the case of deterministic parsing) but is performed as a side effect of the Right-Arc and Left-Arc transitions. Every time we apply one of these transitions, we check whether the dependent x of the new arc is part of a unit y on the lexical stack satisfying one of the following conditions: (i) x = y; (ii) x is a lexical child of y and every lexical node z in y either has a syntactic head in A or is the root of the dependency tree. If (i) or (ii) is satisfied, we move y from the lexical stack to the set L of lexical units as a side effect of the arc transition. 4 Experiments This section provides experimental results obtained with a simple implementation of our system using a greedy search parsing algorithm and a linear model trained with an averaged perceptron with shuffled examples and a static oracle. More precisely, the static oracle is defined using the following transition priorities: MergeF > MergeN > Complete > LeftArc > RightArc > Shift. At each state of the training phase, the static oracle selects the valid transition that has the higher priority. We evaluated the two variants of the system, namely Explicit and Implicit, with explicit and implicit completion, respectively. They were compared against the joint approach proposed in Candito and Constant (2014) that we applied to an arcstandard parser, instead of a graph-based parser. The parser is trained on a treebank where MWE status and grammatical function are concatenated in arc labels. We consider it as the Baseline. 
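The static oracle described at the beginning of this section reduces to a priority-ordered selection; the sketch below is our own illustration, with `is_valid` standing in as a placeholder for the checks that a transition is both applicable in the current configuration and consistent with the gold syntactic and lexical structures.

```python
# Transition priorities used by the static oracle, highest first.
PRIORITY = ["MergeF", "MergeN", "Complete", "LeftArc", "RightArc", "Shift"]

def static_oracle(config, gold, is_valid):
    """Return the highest-priority transition that is valid in `config`
    with respect to the gold annotation `gold`."""
    for name in PRIORITY:
        if is_valid(name, config, gold):
            return name
    raise ValueError("no valid transition found")
```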
We used classical transition-based parsing features consisting of patterns combining linguistic attributes of nodes on the stacks and the buffer, as well as processed subtrees and transition history. We can note that the joint systems do not contain features sharing elements of both stacks. Preliminary tuning experiments did not show gains when using such features. We also compared these systems against weaker ones, obtained by disabling some transitions and using one stack only. Two systems, namely Syntactic-baseline and Syntactic only predict the syntactic nodes and the dependency structure by using respectively a baseline parser and our system where neither the lexical stack nor the MergeN and Complete transitions are used. The latter one is an implementation of the proposal in Nivre (2014). Two systems are devoted only to the lexical layer: Lexical only recognizes the lexical units (only the lexical stack and the MergeN and Complete transitions are activated); Fixed only identifies the fixed expressions. 166 Corpus EWT FTB Train Test Train Dev Test # sent. 3,312 500 14,759 1,235 2,541 # tokens 48,408 7,171 443,113 38,820 75,216 # MWEs 2,996 401 23,556 2,119 4,043 # fixed 10,987 925 1,992 Table 1: Dataset statistics. We also implemented pipeline systems where: (i) fixed MWEs are identified by applying only the Fixed system; (ii) elements of predicted MWEs are merged into single tokens; (iii) the retokenized text is parsed using the Baseline or Implicit systems trained on a dataset where fixed MWEs consist of single tokens. We carried out our experiments on two different datasets annotating both the syntactic structure and the MWEs: the French Treebank [FTB] (Abeill´e et al., 2003) and the STREUSLE corpus (Schneider et al., 2014b) combined with the English Web Treebank [EWT] (Bies et al., 2012). They are commonly used for evaluating the most recent MWE-aware dependency parsers and supervised MWE identification systems. Concerning the FTB, we used the dependency version developed in Candito and Constant (2014) derived from the SPMRL shared task version (Seddah et al., 2013). Fixed and non-fixed MWEs are distinguished, but are limited to contiguous ones only. The STREUSLE corpus (Schneider et al., 2014b) corresponds to a subpart of the English Web Treebank (EWT). It consists of reviews and is comprehensively annotated in contiguous and discontiguous MWEs. Fixed and non-fixed expressions are not distinguished though the distinction between non-compositional and collocational MWEs is made. This implies that the MergeF transition is not used on this dataset. Practically, we used the LTH converter (Johansson and Nugues, 2007) to obtain the dependency version of the EWT constituent version. We also used the predicted linguistic attributes used in Constant and Le Roux (2015) and in Constant et al. (2016). Both datasets include predicted POS tags, lemmas and morphology, as well as features computed from compound dictionary lookup. None of them is entirely satisfying with respect to our model, but they allow us to evaluate the feasibility of the approach. Statistics on the two datasets are provided in Table 1. Results are provided in Table 2 for French and in Table 3 for English. In order to evaluate the syntactic layer, we used classical UAS and LAS metrics. Before evaluation, merged units were automatically decomposed in the form of flat subtrees using specific arcs as in Seddah et al. (2013), so all systems can be evaluated and compared at the token level. 
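As a rough illustration of what "patterns combining linguistic attributes of nodes on the stacks and the buffer" means in practice, a minimal feature-extraction sketch is given below, again reusing the Configuration from the earlier sketch. The specific templates, attribute names, head approximation for merged units and history length are invented for the example; the actual feature set used in the experiments is not reproduced here.

```python
def node_attrs(node, tokens):
    """Surface attributes of a stack or buffer node.

    For a merged MWE node we fall back to placeholder attributes carrying its tag;
    `tokens` maps positions to dicts with 'form', 'pos', 'lemma', ...
    """
    if isinstance(node, tuple):          # (tag, x, y) tree built by a Merge
        return {"form": "_MWE_", "pos": node[0], "lemma": "_MWE_"}
    return tokens[node]

def extract_features(config, tokens, history):
    """Illustrative templates over both stacks, the buffer and the transition history."""
    feats = {}
    for name, stack in (("s1", config.sigma1), ("s2", config.sigma2)):
        for depth in (0, 1):             # top and second element of each stack
            if len(stack) > depth:
                a = node_attrs(stack[-1 - depth], tokens)
                feats[f"{name}{depth}.pos"] = a["pos"]
                feats[f"{name}{depth}.form"] = a["form"]
    for j in (0, 1):                     # first two buffer tokens
        if len(config.beta) > j:
            feats[f"b{j}.pos"] = tokens[config.beta[j]]["pos"]
    # combined (pattern) feature and a short transition history
    if "s10.pos" in feats and "b0.pos" in feats:
        feats["s10.pos+b0.pos"] = feats["s10.pos"] + "|" + feats["b0.pos"]
    feats["hist"] = "|".join(history[-2:])
    return feats
```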
MWE identification is evaluated with the F-score of the MWE segmentation, namely MWE for all MWEs and FMWE for fixed MWEs only. An MWE segment corresponds to the set of its component positions in the input token sequence. First, results show that our joint system consistently and significantly outperforms the baseline in terms of MWE identification on both datasets. The merge transitions play a key role. In terms of syntax, the Explicit system does not have any positive impact (on par or degraded scores), whereas the Implicit system allows us to obtain slightly better results on French and a significant improvement on English. The very good performances on English might be explained by the fact that it contains a non-negligeable set of discontiguous MWEs which complicates the prediction of explicit Complete transitions. When compared with weaker systems, we can see that the addition of the lexical layer helps improve the prediction of the syntactic layer, which confirms results on symbolic parsing (Wehrli, 2014). The syntactic layer does not seem to impact the lexical layer prediction: we observe comparable results. This might be due to the fact that syntax is helpful for long-distance discontiguity only, which does not appear in our datasets (the English dataset contains MWEs with small gaps). Another explanation could also be that syntactic parsing accuracy is rather low due to the use of a simple greedy algorithm. Developing more advanced transition-based parsing methods like beam-search may help improve both syntactic parsing accuracy and MWE identification. When comparing joint systems with pipeline ones, we can see that preidentifying fixed MWEs seems to help MWE identification whereas syntactic parsing accuracy tends to be slightly lower. One hypothesis could be that MergeF transitions may confuse the prediction of MergeN transitions. When compared with existing state-of-the-art systems, we can see that the proposed systems achieve MWE identification scores that are comparable with the pipeline and joint approaches used in Candito and Constant (2014) with a graph167 DEV TEST System UAS LAS MWE FMWE UAS LAS MWE FMWE Baseline 86.28 83.67 77.2 83.2 84.85 82.67 75.5 81.9 Explicit 86.36 83.77 79.7 86.0 84.98 82.79 79.3 84.8 Implicit 86.61 84.10 80.0 86.2 85.04 82.93 78.4 84.3 Syntactic only -Baseline 86.31 83.69 83.5 84.89 82.70 82.0 Syntactic only 86.39 83.77 85.0 85.02 82.84 83.8 Lexical only 80.0 79.5 Fixed only 85.7 85.7 Pipeline (Fixed only →Baseline) 85.33 83.29 80.6 85.7 84.86 82.86 80.4 85.7 Pipeline (Fixed only →Implicit) 85.49 83.50 81.8 85.7 84.84 82.89 81.1 85.7 graph-based (Candito and Constant, 2014) 89.7 87.5 77.6 85.4 89.21 86.92 77.0 85.1 CRF+graph-based (Candito and Constant, 2014) 89.8 87.4 79.0 85.0 86.97 89.24 78.6 86.3 CRF (SPMRL) (Le Roux et al., 2014) 82.4 80.5 Table 2: Results on the FTB. To reduce bias due to training with shuffled examples, scores are averages of 3 different training/parsing runs. TRAIN Cross-validation TEST System UAS LAS MWE UAS LAS MWE Baseline 86.16 81.76 49.6 86.31 82.02 46.8 Explicit 86.25 82.09 52.9 86.05 81.68 53.4 Implicit 86.81 82.68 55.0 87.05 83.14 51.6 Syntactic only 86.35 82.23 86.41 82.20 Lexical only 54.5 53.6 (Schneider et al., 2014a) 53.85 Table 3: Results on the reviews part of the English Web Treebank, via cross-validation on the training set with 8 splits, and simple validation on the test set. based parser for French, and the base sequence tagger using a perceptron model with rich MWEdedicated features of Schneider et al. 
(2014a) for English. It reaches lower scores than the best simple CRF-based MWE tagging system of Le Roux et al. (2014). These scores are obtained on the SPMRL shared task version, though they are not entirely comparable with our system as they do not distinguish fixed from non-fixed MWEs. 5 Related work The present paper proposes a new representation for lexical and syntactic analysis in the framework of syntactic dependency parsing. Most existing MWE-aware dependency treebanks represent an MWE as a flat subtree of the syntactic tree with special labels, like in the UD treebanks (Nivre et al., 2016) or in the SPMRL shared task (Seddah et al., 2013), or in other individual treebanks (Nivre and Nilsson, 2004; Eryi˘git et al., 2011). Such representation enables MWE discontinuity, but the internal syntactic structure is not annotated. Candito and Constant (2014) proposed a representation where the irregular and regular MWEs are distinguished: irregular MWEs are integrated in the syntactic tree as above; regular MWEs are annotated in their component attributes while their internal structure is annotated in the syntactic tree. The Prague Dependency Treebank (Bejˇcek et al., 2013) has several interconnected annotation layers: morphological (m-layer), syntactic (a-layer) and semantic (t-layer). All these layers are trees that are interconnected. MWEs are annotated on the t-layer and are linked to an MWE lexicon (Bejˇcek and Straˇn´ak, 2010). Constant and Le Roux (2015) proposed a dependency representation of lexical segmentation allowing annotations of deeper phenomena like MWE nesting. More details on MWE-aware treebanks (including constituent ones) can be found in Ros´en et al. (2015). Statistical MWE-aware dependency parsing has received a growing interest since Nivre and Nilsson (2004). The main challenge resides in finding the best orchestration strategy. Past research has explored either pipeline or joint approaches. Pipeline strategies consist in positioning the MWE recognition either before or after the parser itself, as in Nivre and Nilsson (2004), Eryi˘git et al. (2011), Constant et al. (2013), and Kong et al. (2014) for pre-identification and as in Vincze et al. (2013a) for post-identification. Joint strategies have mainly consisted in using off-the-shelf 168 parsers and integrating MWE annotation in the syntactic structure, so that MWE identification is blind for the parser (Nivre and Nilsson, 2004; Eryi˘git et al., 2011; Seddah et al., 2013; Vincze et al., 2013b; Candito and Constant, 2014; Nasr et al., 2015). Our system includes a special treatment of MWEs using specific transitions in a classical transition-based system, in line with the proposal of Nivre (2014). Constant et al. (2016) also proposed a two-dimensional representation in the form of dependency trees anchored by the same words. The annotation of fixed MWEs is redundant on both dimensions, while they are shared in our representation. They propose, along with this representation, an adaptation of an easy-first parser able to predict both dimensions. Contrary to our system, there are no special mechanisms for treating MWEs. The use of multiple stacks to capture partly independent dimensions is inspired by the multiplanar dependency parser of G´omez-Rodr´ıguez and Nivre (2013). Our parsing strategy for (hierarchical) MWEs is very similar to the deterministic constituency parsing method of Crabb´e (2014). 
6 Conclusion This paper proposes a transition-based system that extends a classical arc-standard parser to handle both lexical and syntactic analysis. It is based on a new representation having two linguistic layers sharing lexical nodes. Experimental results show that MWE identification is greatly improved with respect to the mainstream joint approach. This can be a useful starting point for several lines of research: implementing more advanced transitionbased techniques (beam search, dynamic oracles, deep learning); extending other classical transition systems like arc-eager and hybrid as well as handling non-projectivity. Acknowledgments The authors would like to thank Marie Candito for her fruitful inputs. This work has been partly funded by the French Agence Nationale pour la Recherche, through the PARSEME-FR project (ANR-14-CERA-0001). This work has also been supported in part by the PARSEME European COST Action (IC1207). References Anne Abeill´e, Lionel Cl´ement, and Franc¸ois Toussenel. 2003. Building a treebank for French. In Anne Abeill´e, editor, Treebanks. Kluwer, Dordrecht. Timothy Baldwin and Su Nam Kim. 2010. Multiword Expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, pages 267–292. CRC Press, Taylor and Francis Group, Boca Raton, FL, USA, 2 edition. Eduard Bejˇcek and Pavel Straˇn´ak. 2010. Annotation of Multiword Expressions in the Prague Dependency Treebank. Language Resources and Evaluation, 44(1-2):7–21. Eduard Bejˇcek, Eva Hajiˇcov´a, Jan Hajiˇc, Pavl´ına J´ınov´a, V´aclava Kettnerov´a, Veronika Kol´aˇrov´a, Marie Mikulov´a, Jiˇr´ı M´ırovsk´y, Anna Nedoluzhko, Jarmila Panevov´a, Lucie Pol´akov´a, Magda ˇSevˇc´ıkov´a, Jan ˇStˇep´anek, and ˇS´arka Zik´anov´a. 2013. Prague Dependency Treebank 3.0. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English web treebank. Marie Candito and Matthieu Constant. 2014. Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing. In ACL 14 - The 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, United States, June. ACL. Matthieu Constant and Joseph Le Roux. 2015. Dependency Representations for Lexical Segmentation. In 6th Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2015), Bilbao, Spain, July. Matthieu Constant, Marie Candito, and Djam´e Seddah. 2013. The LIGM-Alpage Architecture for the SPMRL 2013 Shared Task: Multiword Expression Analysis and Dependency Parsing. In Fourth Workshop on Statistical Parsing of Morphologically Rich Languages, pages 46–52, Seattle, United States, October. Matthieu Constant, Joseph Le Roux, and Nadi Tomeh. 2016. Deep lexical segmentation and syntactic parsing in the easy-first dependency framework. In Proceeedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2016). Benoˆıt Crabb´e. 2014. An LR-inspired generalized lexicalized phrase structure parser. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 541–552. G¨uls¸en Eryi˘git, Tugay ˙Ilbay, and Ozan Arkan Can. 2011. Multiword Expressions in Statistical Dependency Parsing. In Proceedings of the Second Workshop on Statistical Parsing of Morphologically Rich 169 Languages, SPMRL ’11, pages 45–55, Stroudsburg, PA, USA. Association for Computational Linguistics. Yoav Goldberg and Joakim Nivre. 2013. 
Training Deterministic Parsers with Non-Deterministic Oracles. TACL, 1:403–414. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2013. Divisible Transition Systems and Multiplanar Dependency Parsing. Comput. Linguist., 39(4):799– 845, December. Richard Johansson and Pierre Nugues. 2007. Extended Constituent-to-dependency Conversion for English. In Proceedings of NODALIDA 2007, pages 105–112, Tartu, Estonia, May 25-26. Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. Dependency parsing for tweets. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, October. Association for Computational Linguistics. Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan and Claypool. Joseph Le Roux, Antoine Rozenknop, and Matthieu Constant. 2014. Syntactic Parsing and Compound Recognition via Dual Decomposition: Application to French. In COLING. Alexis Nasr, Carlos Ramisch, Jos´e Deulofeu, and Andr´e Valli. 2015. Joint Dependency Parsing and Multiword Expression Tokenization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1116–1126. Joakim Nivre and Jens Nilsson. 2004. Multiword units in syntactic parsing. Proceedings of Methodologies and Evaluation of Multiword Units in Real-World Applications (MEMURA). Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Dan Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation. Joakim Nivre. 2004. Incrementality in Deterministic Dependency Parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together (ACL), pages 50–57. Joakim Nivre. 2008. Algorithms for Deterministic Incremental Dependency Parsing. Comput. Linguist., 34(4):513–553, December. Joakim Nivre. 2014. Transition-Based Parsing with Multiword Expressions. In 2nd PARSEME General Meeting, Athens, Greece. Carlos Ramisch. 2015. Multiword Expressions Acquisition: A Generic and Open Framework, volume XIV of Theory and Applications of Natural Language Processing. Springer. Victoria Ros´en, Gyri Smørdal Losnegaard, Koenraad De Smedt, Eduard Bejˇcek, Agata Savary, Adam Przepi´orkowski, Petya Osenova, and Verginica Mititelu. 2015. A survey of multiword expressions in treebanks. In Proceedings of the 14th International Workshop on Treebanks and Linguistic Theories (TLT14). Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 2276 of Lecture Notes in Computer Science, pages 1–15. Springer Berlin Heidelberg. Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A. Smith. 2014a. Discriminative lexical semantic segmentation with gaps: running the MWE gamut. Transactions of the Association for Computational Linguistics (TACL). Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. 2014b. 
Comprehensive Annotation of Multiword Expressions in a Social Web Corpus. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation, pages 455–461, Reykjav´ık, Iceland, May. ELRA. Djam´e Seddah, Reut Tsarfaty, Sandra K´’ubler, Marie Candito, Jinho Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiorkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Cl´ergerie. 2013. Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages. In Proceedings of the 4th Workshop on Statistical Parsing of Morphologically Rich Languages, Seattle, WA. Violeta Seretan. 2011. Syntax-Based Collocation Extraction. Text, Speech and Language Technology. Springer. Veronika Vincze, Istv´an Nagy T., and Rich´ard Farkas. 2013a. Identifying english and hungarian light verb constructions: A contrastive approach. In Proceedings of the 51st Annual Meeting of the Association 170 for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 2: Short Papers, pages 255–261. Veronika Vincze, J´anos Zsibrita, and Istv`an Nagy T. 2013b. Dependency parsing for identifying hungarian light verb constructions. In Proceedings of International Joint Conference on Natural Language Processing (IJCNLP 2013), Nagoya, Japan. Eric Wehrli. 2014. The Relevance of Collocations for Parsing. In Proceedings of the 10th Workshop on Multiword Expressions (MWE), pages 26–32, Gothenburg, Sweden, April. Association for Computational Linguistics. 171
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1693–1703, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation Junyoung Chung Universit´e de Montr´eal [email protected] Kyunghyun Cho New York University Yoshua Bengio Universit´e de Montr´eal CIFAR Senior Fellow Abstract The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru. 1 Introduction The existing machine translation systems have relied almost exclusively on word-level modelling with explicit segmentation. This is mainly due to the issue of data sparsity which becomes much more severe, especially for n-grams, when a sentence is represented as a sequence of characters rather than words, as the length of the sequence grows significantly. In addition to data sparsity, we often have a priori belief that a word, or its segmented-out lexeme, is a basic unit of meaning, making it natural to approach translation as mapping from a sequence of source-language words to a sequence of target-language words. This has continued with the more recently proposed paradigm of neural machine translation, although neural networks do not suffer from character-level modelling and rather suffer from the issues specific to word-level modelling, such as the increased computational complexity from a very large target vocabulary (Jean et al., 2015; Luong et al., 2015b). Therefore, in this paper, we address a question of whether neural machine translation can be done directly on a sequence of characters without any explicit word segmentation. To answer this question, we focus on representing the target side as a character sequence. We evaluate neural machine translation models with a character-level decoder on four language pairs from WMT’15 to make our evaluation as convincing as possible. We represent the source side as a sequence of subwords extracted using byte-pair encoding from Sennrich et al. (2015), and vary the target side to be either a sequence of subwords or characters. On the target side, we further design a novel recurrent neural network (RNN), called biscale recurrent network, that better handles multiple timescales in a sequence, and test it in addition to a naive, stacked recurrent neural network. On all of the four language pairs–En-Cs, En-De, En-Ru and En-Fi–, the models with a characterlevel decoder outperformed the ones with a subword-level decoder. 
We observed a similar trend with the ensemble of each of these configurations, outperforming both the previous best neural and non-neural translation systems on EnCs, En-De and En-Fi, while achieving a comparable result on En-Ru. We find these results to be a strong evidence that neural machine translation can indeed learn to translate at the character-level and that in fact, it benefits from doing so. 2 Neural Machine Translation Neural machine translation refers to a recently proposed approach to machine translation (Cho et 1693 al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015). This approach aims at building an end-toend neural network that takes as input a source sentence X = (x1, . . . , xTx) and outputs its translation Y = (y1, . . . , yTy), where xt and yt′ are respectively source and target symbols. This neural network is constructed as a composite of an encoder network and a decoder network. The encoder network encodes the input sentence X into its continuous representation. In this paper, we closely follow the neural translation model proposed in Bahdanau et al. (2015) and use a bidirectional recurrent neural network, which consists of two recurrent neural networks. The forward network reads the input sentence in a forward direction: −→z t = −→φ (ex(xt), −→z t−1), where ex(xt) is a continuous embedding of the t-th input symbol, and φ is a recurrent activation function. Similarly, the reverse network reads the sentence in a reverse direction (right to left): ←−z t = ←−φ (ex(xt), ←−z t+1). At each location in the input sentence, we concatenate the hidden states from the forward and reverse RNNs to form a context set C = {z1, . . . , zTx} , where zt = −→z t; ←−z t  . Then the decoder computes the conditional distribution over all possible translations based on this context set. This is done by first rewriting the conditional probability of a translation: log p(Y |X) = PTy t′=1 log p(yt′|y<t′, X). For each conditional term in the summation, the decoder RNN updates its hidden state by ht′ = φ(ey(yt′−1), ht′−1, ct′), (1) where ey is the continuous embedding of a target symbol. ct′ is a context vector computed by a softalignment mechanism: ct′ = falign(ey(yt′−1), ht′−1, C)). (2) The soft-alignment mechanism falign weights each vector in the context set C according to its relevance given what has been translated. The weight of each vector zt is computed by αt,t′ = 1 Z efscore(ey(yt′−1),ht′−1,zt), (3) where fscore is a parametric function returning an unnormalized score for zt given ht′−1 and yt′−1. We use a feedforward network with a single hidden layer in this paper.1 Z is a normalization constant: Z = PTx k=1 efscore(ey(yt′−1),ht′−1,zk). This 1For other possible implementations, see (Luong et al., 2015a). procedure can be understood as computing the alignment probability between the t′-th target symbol and t-th source symbol. The hidden state ht′, together with the previous target symbol yt′−1 and the context vector ct′, is fed into a feedforward neural network to result in the conditional distribution: p(yt′ | y<t′, X) ∝ef yt′ out(ey(yt′−1),ht′,ct′). (4) The whole model, consisting of the encoder, decoder and soft-alignment mechanism, is then tuned end-to-end to minimize the negative loglikelihood using stochastic gradient descent. 3 Towards Character-Level Translation 3.1 Motivation Let us revisit how the source and target sentences (X and Y ) are represented in neural machine translation. 
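Before doing so, the soft-alignment computation of Eqs. (2)-(3) can be made concrete with a short numpy sketch. The single-hidden-layer scoring network is written out explicitly, but the weight names, shapes and toy dimensions are our own choices for illustration, not the reference implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_alignment(prev_emb, prev_h, context_set, Wa, Ua, Ca, va):
    """One application of f_align: score every source annotation z_t with a
    single-hidden-layer feedforward net, normalize, and return the weighted
    sum c_t' (Eqs. 2-3). context_set has shape (Tx, dim_z)."""
    scores = np.array([
        va @ np.tanh(Wa @ prev_emb + Ua @ prev_h + Ca @ z)
        for z in context_set
    ])
    alpha = softmax(scores)                  # alignment probabilities alpha_{t,t'}
    return alpha @ context_set, alpha        # context vector c_t' and its weights

# Toy shapes: 4 source positions, annotation size 6, decoder state 5, embedding 2, scorer hidden 3.
rng = np.random.default_rng(0)
Tx, dz, dh, de, da = 4, 6, 5, 2, 3
C = rng.standard_normal((Tx, dz))            # context set {z_1, ..., z_Tx}
Wa, Ua, Ca = (rng.standard_normal((da, d)) for d in (de, dh, dz))
va = rng.standard_normal(da)
c_t, alpha = soft_alignment(rng.standard_normal(de), rng.standard_normal(dh), C, Wa, Ua, Ca, va)
print(alpha.round(3), c_t.shape)             # weights sum to 1; c_t has shape (dz,)
```

Returning to the representation of X and Y: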
For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary Vx of unique tokens to which we assign integer indices. A source sentence X is then built as a sequence of the indices of such tokens belonging to the sentence, i.e., X = (x1, . . . , xTx), where xt ∈{1, 2, . . . , |Vx|}. The target sentence is similarly transformed into a target sequence of integer indices. Each token, or its index, is then transformed into a so-called one-hot vector of dimensionality |Vx|. All but one elements of this vector are set to 0. The only element whose index corresponds to the token’s index is set to 1. This one-hot vector is the one which any neural machine translation model sees. The embedding function, ex or ey, is simply the result of applying a linear transformation (the embedding matrix) to this one-hot vector. The important property of this approach based on one-hot vectors is that the neural network is oblivious to the underlying semantics of the tokens. To the neural network, each and every token in the vocabulary is equal distance away from every other token. The semantics of those tokens are simply learned (into the embeddings) to maximize the translation quality, or the log-likelihood of the model. This property allows us great freedom in the choice of tokens’ unit. Neural networks have been shown to work well with word tokens (Bengio et al., 2001; Schwenk, 2007; Mikolov et al., 2010) but also with finer units, such as subwords (Sennrich et al., 2015; Botha and Blunsom, 2014; Lu1694 ong et al., 2013) as well as symbols resulting from compression/encoding (Chitnis and DeNero, 2015). Although there have been a number of previous research reporting the use of neural networks with characters (see, e.g., Mikolov et al. (2012) and Santos and Zadrozny (2014)), the dominant approach has been to preprocess the text into a sequence of symbols, each associated with a sequence of characters, after which the neural network is presented with those symbols rather than with characters. More recently in the context of neural machine translation, two research groups have proposed to directly use characters. Kim et al. (2015) proposed to represent each word not as a single integer index as before, but as a sequence of characters, and use a convolutional network followed by a highway network (Srivastava et al., 2015) to extract a continuous representation of the word. This approach, which effectively replaces the embedding function ex, was adopted by Costa-Juss`a and Fonollosa (2016) for neural machine translation. Similarly, Ling et al. (2015b) use a bidirectional recurrent neural network to replace the embedding functions ex and ey to respectively encode a character sequence to and from the corresponding continuous word representation. A similar, but slightly different approach was proposed by Lee et al. (2015), where they explicitly mark each character with its relative location in a word (e.g., “B”eginning and “I”ntermediate). Despite the fact that these recent approaches work at the level of characters, it is less satisfying that they all rely on knowing how to segment characters into words. Although it is generally easy for languages like English, this is not always the case. This word segmentation procedure can be as simple as tokenization followed by some punctuation normalization, but also can be as complicated as morpheme segmentation requiring a separate model to be trained in advance (Creutz and Lagus, 2005; Huang and Zhao, 2007). 
Furthermore, these segmentation2 steps are often tuned or designed separately from the ultimate objective of translation quality, potentially contributing to a suboptimal quality. Based on this observation and analysis, in this paper, we ask ourselves and the readers a question which should have been asked much earlier: Is it 2From here on, the term segmentation broadly refers to any method that splits a given character sequence into a sequence of subword symbols. possible to do character-level translation without any explicit segmentation? 3.2 Why Word-Level Translation? (1) Word as a Basic Unit of Meaning A word can be understood in two different senses. In the abstract sense, a word is a basic unit of meaning (lexeme), and in the other sense, can be understood as a “concrete word as used in a sentence.” (Booij, 2012). A word in the former sense turns into that in the latter sense via a process of morphology, including inflection, compounding and derivation. These three processes do alter the meaning of the lexeme, but often it stays close to the original meaning. Because of this view of words as basic units of meaning (either in the form of lexemes or derived form) from linguistics, much of previous work in natural language processing has focused on using words as basic units of which a sentence is encoded as a sequence. Also, the potential difficulty in finding a mapping between a word’s character sequence and meaning3 has likely contributed to this trend toward word-level modelling. (2) Data Sparsity There is a further technical reason why much of previous research on machine translation has considered words as a basic unit. This is mainly due to the fact that major components in the existing translation systems, such as language models and phrase tables, are a count-based estimator of probabilities. In other words, a probability of a subsequence of symbols, or pairs of symbols, is estimated by counting the number of its occurrences in a training corpus. This approach severely suffers from the issue of data sparsity, which is due to a large state space which grows exponentially w.r.t. the length of subsequences while growing only linearly w.r.t. the corpus size. This poses a great challenge to character-level modelling, as any subsequence will be on average 4–5 times longer when characters, instead of words, are used. Indeed, Vilar et al. (2007) reported worse performance when the character sequence was directly used by a phrase-based machine translation system. More recently, Neubig et al. (2013) proposed a method to improve character-level translation with phrasebased translation systems, however, with only a limited success. 3For instance, “quit”, “quite” and “quiet” are one editdistance away from each other but have distinct meanings. 1695 (3) Vanishing Gradient Specifically to neural machine translation, a major reason behind the wide adoption of word-level modelling is due to the difficulty in modelling long-term dependencies with recurrent neural networks (Bengio et al., 1994; Hochreiter, 1998). As the lengths of the sentences on both sides grow when they are represented in characters, it is easy to believe that there will be more long-term dependencies that must be captured by the recurrent neural network for successful translation. 3.3 Why Character-Level Translation? Why not Word-Level Translation? The most pressing issue with word-level processing is that we do not have a perfect word segmentation algorithm for any one language. 
A perfect segmentation algorithm needs to be able to segment any given sentence into a sequence of lexemes and morphemes. This problem is however a difficult problem on its own and often requires decades of research (see, e.g., Creutz and Lagus (2005) for Finnish and other morphologically rich languages and Huang and Zhao (2007) for Chinese). Therefore, many opt to using either a rule-based tokenization approach or a suboptimal, but still available, learning based segmentation algorithm. The outcome of this naive, sub-optimal segmentation is that the vocabulary is often filled with many similar words that share a lexeme but have different morphology. For instance, if we apply a simple tokenization script to an English corpus, “run”, “runs”, “ran” and “running” are all separate entries in the vocabulary, while they clearly share the same lexeme “run”. This prevents any machine translation system, in particular neural machine translation, from modelling these morphological variants efficiently. More specifically in the case of neural machine translation, each of these morphological variants– “run”, “runs”, “ran” and “running”– will be assigned a d-dimensional word vector, leading to four independent vectors, while it is clear that if we can segment those variants into a lexeme and other morphemes, we can model them more efficiently. For instance, we can have a d-dimensional vector for the lexeme “run” and much smaller vectors for “s” and“ing”. Each of those variants will be then a composite of the lexeme vector (shared across these variants) and morpheme vectors (shared across words sharing the same suffix, for example) (Botha and Blunsom, 2014). This makes use of distributed representation, which generally yields better generalization, but seems to require an optimal segmentation, which is unfortunately almost never available. In addition to inefficiency in modelling, there are two additional negative consequences from using (unsegmented) words. First, the translation system cannot generalize well to novel words, which are often mapped to a token reserved for an unknown word. This effectively ignores any meaning or structure of the word to be incorporated when translating. Second, even when a lexeme is common and frequently observed in the training corpus, its morphological variant may not be. This implies that the model sees this specific, rare morphological variant much less and will not be able to translate it well. However, if this rare morphological variant shares a large part of its spelling with other more common words, it is desirable for a machine translation system to exploit those common words when translating those rare variants. Why Character-Level Translation? All of these issues can be addressed to certain extent by directly modelling characters. Although the issue of data sparsity arises in character-level translation, it is elegantly addressed by using a parametric approach based on recurrent neural networks instead of a non-parametric count-based approach. Furthermore, in recent years, we have learned how to build and train a recurrent neural network that can well capture long-term dependencies by using more sophisticated activation functions, such as long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014). Kim et al. (2015) and Ling et al. 
(2015a) recently showed that by having a neural network that converts a character sequence into a word vector, we avoid the issues from having many morphological variants appearing as separate entities in a vocabulary. This is made possible by sharing the character-to-word neural network across all the unique tokens. A similar approach was applied to machine translation by Ling et al. (2015b). These recent approaches, however, still rely on the availability of a good, if not optimal, segmentation algorithm. Ling et al. (2015b) indeed states that “[m]uch of the prior information regarding morphology, cognates and rare word translation 1696 among others, should be incorporated”. It however becomes unnecessary to consider these prior information, if we use a neural network, be it recurrent, convolution or their combination, directly on the unsegmented character sequence. The possibility of using a sequence of unsegmented characters has been studied over many years in the field of deep learning. For instance, Mikolov et al. (2012) and Sutskever et al. (2011) trained a recurrent neural network language model (RNN-LM) on character sequences. The latter showed that it is possible to generate sensible text sequences by simply sampling a character at a time from this model. More recently, Zhang et al. (2015) and Xiao and Cho (2016) successfully applied a convolutional net and a convolutionalrecurrent net respectively to character-level document classification without any explicit segmentation. Gillick et al. (2015) further showed that it is possible to train a recurrent neural network on unicode bytes, instead of characters or words, to perform part-of-speech tagging and named entity recognition. These previous works suggest the possibility of applying neural networks for the task of machine translation, which is often considered a substantially more difficult problem compared to document classification and language modelling. 3.4 Challenges and Questions There are two overlapping sets of challenges for the source and target sides. On the source side, it is unclear how to build a neural network that learns a highly nonlinear mapping from a spelling to the meaning of a sentence. On the target side, there are two challenges. The first challenge is the same one from the source side, as the decoder neural network needs to summarize what has been translated. In addition to this, the character-level modelling on the target side is more challenging, as the decoder network must be able to generate a long, coherent sequence of characters. This is a great challenge, as the size of the state space grows exponentially w.r.t. the number of symbols, and in the case of characters, it is often 300-1000 symbols long. All these challenges should first be framed as questions; whether the current recurrent neural networks, which are already widely used in neural machine translation, are able to address these challenges as they are. In this paper, we aim at an(a) Gating units (b) One-step processing Figure 1: Bi-scale recurrent neural network swering these questions empirically and focus on the challenges on the target side (as the target side shows both of the challenges). 4 Character-Level Translation In this paper, we try to answer the questions posed earlier by testing two different types of recurrent neural networks on the target side (decoder). First, we test an existing recurrent neural network with gated recurrent units (GRUs). We call this decoder a base decoder. 
Second, we build a novel two-layer recurrent neural network, inspired by the gated-feedback network from Chung et al. (2015), called a biscale recurrent neural network. We design this network to facilitate capturing two timescales, motivated by the fact that characters and words may work at two separate timescales. We choose to test these two alternatives for the following purposes. Experiments with the base decoder will clearly answer whether the existing neural network is enough to handle character-level decoding, which has not been properly answered in the context of machine translation. The alternative, the bi-scale decoder, is tested in order to see whether it is possible to design a better decoder, if the answer to the first question is positive. 4.1 Bi-Scale Recurrent Neural Network In this proposed bi-scale recurrent neural network, there are two sets of hidden units, h1 and h2. They contain the same number of units, i.e., dim(h1) = dim(h2). The first set h1 models a fast-changing timescale (thereby, a faster layer), and h2 a slower timescale (thereby, a slower layer). For each hidden unit, there is an associated gating unit, to which we refer by g1 and g2. For the description below, we use yt′−1 and ct′ for the previous target symbol and the context vector (see Eq. (2)), respectively. 1697 Let us start with the faster layer. The faster layer outputs two sets of activations, a normal output h1 t′ and its gated version ˇh1 t′. The activation of the faster layer is computed by h1 t′ = tanh  Wh1 h ey(yt′−1); ˇh1 t′−1; ˆh2 t′−1; ct′ i , where ˇh1 t′−1 and ˆh2 t′−1 are the gated activations of the faster and slower layers respectively. These gated activations are computed by ˇh1 t′ = (1 −g1 t′) ⊙h1 t′, ˆh2 t′ = g1 t′ ⊙h2 t′. In other words, the faster layer’s activation is based on the adaptive combination of the faster and slower layers’ activations from the previous time step. Whenever the faster layer determines that it needs to reset, i.e., g1 t′−1 ≈1, the next activation will be determined based more on the slower layer’s activation. The faster layer’s gating unit is computed by g1 t′ = σ  Wg1 h ey(yt′−1); ˇh1 t′−1; ˆh2 t′−1; ct′ i , where σ is a sigmoid function. The slower layer also outputs two sets of activations, a normal output h2 t′ and its gated version ˇh2 t′. These activations are computed as follows: h2 t′ = (1 −g1 t′) ⊙h2 t′−1 + g1 t′ ⊙˜h2 t′, ˇh2 t′ = (1 −g2 t′) ⊙h2 t′, where ˜h2 t′ is a candidate activation. The slower layer’s gating unit g2 t′ is computed by g2 t′ =σ  Wg2  (g1 t′ ⊙h1 t′); ˇh2 t′−1; ct′ . This adaptive leaky integration based on the gating unit from the faster layer has a consequence that the slower layer updates its activation only when the faster layer resets. This puts a soft constraint that the faster layer runs at a faster rate by preventing the slower layer from updating while the faster layer is processing a current chunk. The candidate activation is then computed by ˜h2 t′ = tanh  Wh2  (g1 t′ ⊙h1 t′); ˇh2 t′−1; ct′ . (5) ˇh2 t′−1 indicates the reset activation from the previous time step, similarly to what happened in the faster layer, and ct′ is the input from the context. According to g1 t′ ⊙h1 t′ in Eq. (5), the faster layer influences the slower layer, only when the faster Figure 2: (left) The BLEU scores on En-Cs w.r.t. the length of source sentences. (right) The difference of word negative log-probabilities between the subword-level decoder and either of the character-level base or bi-scale decoder. 
layer has finished processing the current chunk and is about to reset itself (g1 t′ ≈1). In other words, the slower layer does not receive any input from the faster layer, until the faster layer has quickly processed the current chunk, thereby running at a slower rate than the faster layer does. At each time step, the final output of the proposed bi-scale recurrent neural network is the concatenation of the output vectors of the faster and slower layers, i.e.,  h1; h2 . This concatenated vector is used to compute the probability distribution over all the symbols in the vocabulary, as in Eq. (4). See Fig. 1 for graphical illustration. 5 Experiment Settings For evaluation, we represent a source sentence as a sequence of subword symbols extracted by bytepair encoding (BPE, Sennrich et al. (2015)) and a target sentence either as a sequence of BPE-based symbols or as a sequence of characters. Corpora and Preprocessing We use all available parallel corpora for four language pairs from WMT’15: En-Cs, En-De, En-Ru and En-Fi. They consist of 12.1M, 4.5M, 2.3M and 2M sentence pairs, respectively. We tokenize each corpus using a tokenization script included in Moses.4 We only use the sentence pairs, when the source side is up to 50 subword symbols long and the target side is either up to 100 subword symbols or 500 characters. We do not use any monolingual corpus. For all the pairs other than En-Fi, we use newstest-2013 as a development set, and newstest2014 (Test1) and newstest-2015 (Test2) as test sets. For En-Fi, we use newsdev-2015 and newstest2015 as development and test sets, respectively. 4Although tokenization is not necessary for characterlevel modelling, we tokenize the all target side corpora to make comparison against word-level modelling easier. 1698 Src Depth Attention Model Development Test1 Test2 Trgt h1 h2 Single Ens Single Ens Single Ens En-De (a) BPE BPE 1 D Base 20.78 – 19.98 – 21.72 – (b) 2 D D 21.2621.45 20.62 23.49 20.4720.88 19.30 23.10 22.0222.21 21.35 24.83 (c) Char 2 D Base 21.5721.88 20.88 23.14 21.3321.56 19.82 23.11 23.4523.91 21.72 25.24 (d) 2 D D 20.31 – 19.70 – 21.30 – (e) 2 D Bi-S 21.2921.43 21.13 23.05 21.2521.47 20.62 23.04 23.0623.47 22.85 25.44 (f) 2 D D 20.78 – 20.19 – 22.26 – (g) 2 D 20.08 – 19.39 – 20.94 – State-of-the-art Non-Neural Approach∗ – 20.60(1) 24.00(2) En-Cs (h) BPE BPE 2 D D Base 16.1216.96 15.96 19.21 17.1617.68 16.38 20.79 14.6315.09 14.26 17.61 (i) Char 2 D Base 17.6817.78 17.39 19.52 19.2519.55 18.89 21.95 16.9817.17 16.81 18.92 (j) 2 D Bi-S 17.6217.93 17.43 19.83 19.2719.53 19.15 22.15 16.8617.10 16.68 18.93 State-of-the-art Non-Neural Approach∗ – 21.00(3) 18.20(4) En-Ru (k) BPE BPE 2 D D Base 18.5618.70 18.26 21.17 25.3025.40 24.95 29.26 19.7220.29 19.02 22.96 (l) Char 2 D Base 18.5618.87 18.39 20.53 26.0026.07 25.04 29.37 21.1021.24 20.14 23.51 (m) 2 D Bi-S 18.3018.54 17.88 20.53 25.5925.76 24.57 29.26 20.7321.02 19.97 23.75 State-of-the-art Non-Neural Approach∗ – 28.70(5) 24.30(6) En-Fi (n) BPE BPE 2 D D Base 9.6110.02 9.24 11.92 – – 8.979.17 8.88 11.73 (o) Char 2 D Base 11.1911.55 11.09 13.72 – – 10.9311.56 10.11 13.48 (p) 2 D Bi-S 10.7311.04 10.40 13.39 – – 10.2410.63 9.71 13.32 State-of-the-art Non-Neural Approach∗ – – 12.70(7) Table 1: BLEU scores of the subword-level, character-level base and character-level bi-scale decoders for both single models and ensembles. The best scores among the single models per language pair are bold-faced, and those among the ensembles are underlined. 
When available, we report the median value, and the minimum and maximum values as a subscript and a superscript, respectively. (∗) http: //matrix.statmt.org/ as of 11 March 2016 (constrained only). (1) Freitag et al. (2014). (2, 6) Williams et al. (2015). (3, 5) Durrani et al. (2014). (4) Haddow et al. (2015). (7) Rubino et al. (2015). Models and Training We test three models settings: (1) BPE→BPE, (2) BPE→Char (base) and (3) BPE→Char (bi-scale). The latter two differ by the type of recurrent neural network we use. We use GRUs for the encoder in all the settings. We used GRUs for the decoders in the first two settings, (1) and (2), while the proposed bi-scale recurrent network was used in the last setting, (3). The encoder has 512 hidden units for each direction (forward and reverse), and the decoder has 1024 hidden units per layer. We train each model using stochastic gradient descent with Adam (Kingma and Ba, 2014). Each update is computed using a minibatch of 128 sentence pairs. The norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2013). Decoding and Evaluation We use beamsearch to approximately find the most likely translation given a source sentence. The beam widths are 5 and 15 respectively for the subword-level and character-level decoders. They were chosen based on the translation quality on the development set. The translations are evaluated using BLEU.5 Multilayer Decoder and Soft-Alignment Mechanism When the decoder is a multilayer recurrent neural network (including a stacked network as well as the proposed bi-scale network), the decoder outputs multiple hidden vectors–  h1, . . . , hL for L layers, at a time. This allows an extra degree of freedom in the soft-alignment mechanism (fscore in Eq. (3)). We evaluate using alternatives, including (1) using only hL (slower layer) and (2) using all of them (concatenated). Ensembles We also evaluate an ensemble of neural machine translation models and compare its performance against the state-of-the-art phrasebased translation systems on all four language pairs. We decode from an ensemble by taking the average of the output probabilities at each step. 6 Quantitative Analysis Slower Layer for Alignment On En-De, we test which layer of the decoder should be used 5We used the multi-bleu.perl script from Moses. 1699 Figure 3: Alignment matrix of a test example from En-De using the BPE→Char (bi-scale) model. for computing soft-alignments. In the case of subword-level decoder, we observed no difference between choosing any of the two layers of the decoder against using the concatenation of all the layers (Table 1 (a–b)) On the other hand, with the character-level decoder, we noticed an improvement when only the slower layer (h2) was used for the soft-alignment mechanism (Table 1 (c–g)). This suggests that the soft-alignment mechanism benefits by aligning a larger chunk in the target with a subword unit in the source, and we use only the slower layer for all the other language pairs. Single Models In Table 1, we present a comprehensive report of the translation qualities of (1) subword-level decoder, (2) character-level base decoder and (3) character-level bi-scale decoder, for all the language pairs. We see that the both types of character-level decoder outperform the subword-level decoder for En-Cs and En-Fi quite significantly. On En-De, the character-level base decoder outperforms both the subword-level decoder and the character-level bi-scale decoder, validating the effectiveness of the character-level modelling. 
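For reference, one step of the bi-scale decoder compared in these experiments (Section 4.1) can be written out in a few lines of numpy. The weight names, shapes, absence of bias terms and zero initial state below are simplifications made for this sketch, not the configuration actually trained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bi_scale_step(y_emb, state, c_t, W):
    """One step of the bi-scale decoder (Section 4.1).

    state = (h1, h2, g1, g2) from the previous time step; W is a dict of
    weight matrices, each applied to a concatenation of its inputs."""
    h1_prev, h2_prev, g1_prev, g2_prev = state

    # gated activations carried over from the previous step
    h1_check = (1.0 - g1_prev) * h1_prev      # faster layer after its reset gate
    h2_hat   = g1_prev * h2_prev              # slower layer as seen by the faster one
    h2_check = (1.0 - g2_prev) * h2_prev      # slower layer after its own reset gate

    # faster layer
    x_fast = np.concatenate([y_emb, h1_check, h2_hat, c_t])
    h1 = np.tanh(W["h1"] @ x_fast)
    g1 = sigmoid(W["g1"] @ x_fast)

    # slower layer: it receives the faster layer only through g1 * h1 (Eq. 5)
    x_slow = np.concatenate([g1 * h1, h2_check, c_t])
    g2 = sigmoid(W["g2"] @ x_slow)
    h2_tilde = np.tanh(W["h2"] @ x_slow)
    h2 = (1.0 - g1) * h2_prev + g1 * h2_tilde  # adaptive leaky integration

    output = np.concatenate([h1, h2])          # concatenation fed to the output layer, Eq. (4)
    return output, (h1, h2, g1, g2)

# Toy dimensions: embedding 4, hidden 3 per layer, context vector 5.
rng = np.random.default_rng(0)
de, dh, dc = 4, 3, 5
W = {"h1": rng.standard_normal((dh, de + 2 * dh + dc)),
     "g1": rng.standard_normal((dh, de + 2 * dh + dc)),
     "h2": rng.standard_normal((dh, 2 * dh + dc)),
     "g2": rng.standard_normal((dh, 2 * dh + dc))}
state = tuple(np.zeros(dh) for _ in range(4))
out, state = bi_scale_step(rng.standard_normal(de), state, rng.standard_normal(dc), W)
print(out.shape)   # (2 * dh,)
```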
On En-Ru, among the single models, the character-level decoders outperform the subword-level decoder, but in general, we observe that all the three alternatives work comparable to each other. These results clearly suggest that it is indeed possible to do character-level translation without explicit segmentation. In fact, what we observed is that character-level translation often surpasses the translation quality of word-level translation. Of course, we note once again that our experiment is restricted to using an unsegmented character sequence at the decoder only, and a further exploration toward replacing the source sentence with an unsegmented character sequence is needed. Ensembles Each ensemble was built using eight independent models. The first observation we make is that in all the language pairs, neural machine translation performs comparably to, or often better than, the state-of-the-art non-neural translation system. Furthermore, the character-level decoders outperform the subword-level decoder in all the cases. 7 Qualitative Analysis (1) Can the character-level decoder generate a long, coherent sentence? The translation in characters is dramatically longer than that in words, likely making it more difficult for a recurrent neural network to generate a coherent sentence in characters. This belief turned out to be false. As shown in Fig. 2 (left), there is no significant difference between the subword-level and character-level decoders, even though the lengths of the generated translations are generally 5–10 times longer in characters. (2) Does the character-level decoder help with rare words? One advantage of character-level modelling is that it can model the composition of any character sequence, thereby better modelling rare morphological variants. We empirically confirm this by observing the growing gap in the average negative log-probability of words between the subword-level and character-level decoders as the frequency of the words decreases. This is shown in Fig. 2 (right) and explains one potential cause behind the success of character-level decoding in our experiments (we define diff(x, y) = x −y). (3) Can the character-level decoder soft-align between a source word and a target character? In Fig. 3 (left), we show an example softalignment of a source sentence, “Two sets of light so close to one another”. It is clear that the character-level translation model well captured the alignment between the source subwords and target characters. We observe that the characterlevel decoder correctly aligns to “lights” and “sets of” when generating a German compound word 1700 “Lichtersets” (see Fig. 3 (right) for the zoomedin version). This type of behaviour happens similarly between “one another” and “einander”. Of course, this does not mean that there exists an alignment between a source word and a target character. Rather, this suggests that the internal state of the character-level decoder, the base or biscale, well captures the meaningful chunk of characters, allowing the model to map it to a larger chunk (subword) in the source. (4) How fast is the decoding speed of the character-level decoder? We evaluate the decoding speed of subword-level base, characterlevel base and character-level bi-scale decoders on newstest2013 corpus (En-De) with a single Titan X GPU. The subword-level base decoder generates 31.9 words per second, and the character-level base decoder and character-level bi-scale decoder generate 27.5 words per second and 25.6 words per second, respectively. 
Note that this is evaluated in an online setting, performing consecutive translation, where only one sentence is translated at a time. Translating in a batch setting could differ from these results. 8 Conclusion In this paper, we addressed a fundamental question on whether a recently proposed neural machine translation system can directly handle translation at the level of characters without any word segmentation. We focused on the target side, in which a decoder was asked to generate one character at a time, while soft-aligning between a target character and a source subword. Our extensive experiments, on four language pairs–En-Cs, EnDe, En-Ru and En-Fi– strongly suggest that it is indeed possible for neural machine translation to translate at the level of characters, and that it actually benefits from doing so. Our result has one limitation that we used subword symbols in the source side. However, this has allowed us a more fine-grained analysis, but in the future, a setting where the source side is also represented as a character sequence must be investigated. Acknowledgments The authors would like to thank the developers of Theano (Team et al., 2016). We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Qu´ebec, Compute Canada, the Canada Research Chairs, CIFAR and Samsung. KC thanks the support by Facebook, Google (Google Faculty Award 2016) and NVIDIA (GPU Center of Excellence 2015-2016). JC thanks Orhan Firat for his constructive feedbacks. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166. Yoshua Bengio, R´ejean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pages 932–938. Geert Booij. 2012. The grammar of words: An introduction to linguistic morphology. Oxford University Press. Jan A Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In ICML 2014. Rohan Chitnis and John DeNero. 2015. Variablelength word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2088–2093. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), October. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning. Marta R Costa-Juss`a and Jos´e AR Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810. Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology. 1701 Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh’s phrase-based machine translation systems for wmt-14. 
In Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation, Baltimore, MD, USA, pages 97–104. Markus Freitag, Stephan Peitz, Joern Wuebker, Hermann Ney, Matthias Huck, Rico Sennrich, Nadir Durrani, Maria Nadejde, Philip Williams, Philipp Koehn, et al. 2014. Eu-bridge mt: Combined machine translation. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103. Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The edinburgh/jhu phrase-based machine translation systems for wmt 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126– 133. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116. Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8–20. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Hyoung-Gyu Lee, JaeSong Lee, Jun-Seok Kim, and Chang-Ki Lee. 2015. Naver machine translation system for wat 2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 69–73. Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, Ram´on Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104–113. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3. Tomas Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Stefan Kombrink, and J Cernocky. 2012. Subword language modeling with neural networks. Preprint. Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2013. Substring-based machine translation. Machine translation, 27(2):139–166. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026. 
Raphael Rubino, Tommi Pirinen, Miquel Espla-Gomis, N Ljubeˇsic, Sergio Ortiz Rojas, Vassilis Papavassiliou, Prokopis Prokopidis, and Antonio Toral. 2015. Abu-matran at wmt 2015 translation task: Morphological segmentation and web crawling. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 184–191. Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826. Holger Schwenk. 2007. Continuous space language models. Computer Speech & Language, 21(3):492– 518. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376. 1702 Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML’11), pages 1017–1024. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Fr´ed´eric Bastien, Justin Bayer, Anatoly Belikov, et al. 2016. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688. David Vilar, Jan-T Peter, and Hermann Ney. 2007. Can we translate letters? In Proceedings of the Second Workshop on Statistical Machine Translation, pages 33–39. Association for Computational Linguistics. Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, and Philipp Koehn. 2015. Edinburgh’s syntax-based systems at wmt 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 199–209. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657. 1703
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1704–1714, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Target-Side Context for Discriminative Models in Statistical Machine Translation Aleˇs Tamchyna1,2, Alexander Fraser1, Ondˇrej Bojar2 and Marcin Junczys-Dowmunt3 1LMU Munich, Munich, Germany 2Charles University in Prague, Prague, Czech Republic 3Adam Mickiewicz University in Pozna´n, Pozna´n, Poland {tamchyna,bojar}@ufal.mff.cuni.cz [email protected] [email protected] Abstract Discriminative translation models utilizing source context have been shown to help statistical machine translation performance. We propose a novel extension of this work using target context information. Surprisingly, we show that this model can be efficiently integrated directly in the decoding process. Our approach scales to large training data sizes and results in consistent improvements in translation quality on four language pairs. We also provide an analysis comparing the strengths of the baseline source-context model with our extended source-context and targetcontext model and we show that our extension allows us to better capture morphological coherence. Our work is freely available as part of Moses. 1 Introduction Discriminative lexicons address some of the core challenges of phrase-based MT (PBMT) when translating to morphologically rich languages, such as Czech, namely sense disambiguation and morphological coherence. The first issue is semantic: given a source word or phrase, which of its possible meanings (i.e., which stem or lemma) should we choose? Previous work has shown that this can be addressed using a discriminative lexicon. The second issue has to do with morphology (and syntax): given that we selected the correct meaning, which of its inflected surface forms is appropriate? In this work, we integrate such a model directly into the SMT decoder. This enables our classifier to extract features not only from the full source sentence but also from a limited targetside context. This allows the model to not only help with semantics but also to improve morphological and syntactic coherence. For sense disambiguation, source context is the main source of information, as has been shown in previous work (Vickrey et al., 2005), (Carpuat and Wu, 2007), (Gimpel and Smith, 2008) inter alia. Consider the first set of examples in Figure 1, produced by a strong baseline PBMT system. The English word “shooting” has multiple senses when translated into Czech: it may either be the act of firing a weapon or making a film. When the cue word “film” is close, the phrase-based model is able to use it in one phrase with the ambiguous “shooting”, disambiguating correctly the translation. When we add a single word in between, the model fails to capture the relationship and the most frequent sense is selected instead. Wider source context information is required for correct disambiguation. While word/phrase senses can usually be inferred from the source sentence, the correct selection of surface forms requires also information from the target. Note that we can obtain some information from the source. For example, an English subject is often translated into a Czech subject; in which case the Czech word should be in nominative case. 
But there are many decisions that happen during decoding which determine morphological and syntactic properties of words – verbs can have translations which differ in valency frames, they may be translated in either active or passive voice (in which case subject and object would be switched), nouns may have different possible translations which differ in gender, etc. The correct selection of surface forms plays a crucial role in preserving meaning in morphologically rich languages because it is morphology rather than word order that expresses relations between words. (Word order tends to be 1704 Input PBMT Output shooting of the film . nat´aˇcen´ı filmu .  shootingcamera of film . shooting of the expensive film . stˇrelby na drah´y film .  shootingsgun at expensive film . the man saw a cat . muˇz uvidˇel koˇcku .  man saw catacc . the man saw a black cat . muˇz spatˇril ˇcernou koˇcku .  man saw blackacc catacc . the man saw a yellowish cat . muˇz spatˇril naˇzloutl´a koˇcka .  man saw yellowishnom catnom . Figure 1: Examples of problems of PBMT: lexical selection and morphological coherence. Each translation has a corresponding gloss in italics. relatively free and driven more by semantic constraints rather than syntactic constraints.) The language model is only partially able to capture this phenomenon. It has a limited scope and perhaps more seriously, it suffers from data sparsity. The units captured by both the phrase table and the LM are mere sequences of words. In order to estimate their probability, we need to observe them in the training data (many times, if the estimates should be reliable). However, the number of possible n-grams grows exponentially as we increase n, leading to unrealistic requirements on training data sizes. This implies that the current models can (and often do) miss relationships between words even within their theoretical scope. The second set of sentences in Figure 1 demonstrates the problem of data sparsity for morphological coherence. While the phrase-based system can correctly transfer the morphological case of “cat” and even “black cat”, the less usual “yellowish cat” is mistranslated into nominative case, even though the correct phrase “yellowish ||| naˇzloutlou” exists in the phrase table. A model with a suitable representation of two preceding words could easily infer the correct case in this example. Our contributions are the following: • We show that the addition of a feature-rich discriminative model significantly improves translation quality even for large data sizes and that target-side context information consistently further increases this improvement. • We provide an analysis of the outputs which confirms that source-context features indeed help with semantic disambiguation (as is well known). Importantly, we also show that our novel use of target context improves morphological and syntactic coherence. • In addition to extensive experimentation on translation from English to Czech, we also evaluate English to German, English to Polish and English to Romanian tasks, with improvements on translation quality in all tasks, showing that our work is broadly applicable. • We describe several optimizations which allow target-side features to be used efficiently in the context of phrase-based decoding. • Our implementation is freely available in the widely used open-source MT toolkit Moses, enabling other researchers to explore discriminative modelling with target context in MT. 
2 Discriminative Model with Target-Side Context Several different ways of using feature-rich models in MT have been proposed, see Section 6. We describe our approach in this section. 2.1 Model Definition Let f be the source sentence and e its translation. We denote source-side phrases (given a particular phrasal segmentation) ( ¯f1, . . . , ¯ fm) and the individual words (f1, . . . , fn). We use a similar notation for target-side words/phrases. For simplicity, let eprev, eprev−1 denote the words preceding the current target phrase. Assuming target context size of two, we model the following probability distribution: 1705 P(e|f) ∝ Y ( ¯ei, ¯fi)∈(e,f) P(¯ei|¯fi, f, eprev, eprev−1) (1) The probability of a translation is the product of phrasal translation probabilities which are conditioned on the source phrase, the full source sentence and several previous target words. Let GEN(¯fi) be the set of possible translations of the source phrase ¯fi according to the phrase table. We also define a “feature vector” function fv(¯ei, ¯fi, f, eprev, eprev−1) which outputs a vector of features given the phrase pair and its context information. We also have a vector of feature weights w estimated from the training data. Then our model defines the phrasal translation probability simply as follows: P(¯ei|¯fi, f, eprev, eprev−1) = exp(w · fv(¯ei, ¯fi, f, eprev, eprev−1)) P ¯e′∈GEN( ¯fi) exp(w · fv(¯e′, ¯fi, f, eprev, eprev−1)) (2) This definition implies that we have to locally normalize the classifier outputs so that they sum to one. In PBMT, translations are usually scored by a log-linear model. Our classifier produces a single score (the conditional phrasal probability) which we add to the standard log-linear model as an additional feature. The MT system therefore does not have direct access to the classifier features, only to the final score. 2.2 Global Model We use the Vowpal Wabbit (VW) classifier1 in this work. Tamchyna et al. (2014) already integrated VW into Moses. We started from their implementation in order to carry out our work. Classifier features are divided into two “namespaces”: • S. Features that do not depend on the current phrasal translation (i.e., source- and targetcontext features). • T. Features of the current phrasal translation. We make heavy use of feature processing available in VW, namely quadratic feature expansions 1http://hunch.net/˜vw/ and label-dependent features. When generating features for a particular set of translations, we first create the shared features (in the namespace S). These only depend on (source and target) context and are therefore constant for all possible translations of a given phrase. (Note that target-side context naturally depends on the current partial translation. However, when we process the possible translations for a single source phrase, the target context is constant.) Then for each translation, we extract its features and store them in the namespace T. Note that we do not provide a label (or class) to VW – it is up to these translation features to describe the target phrase. (And this is what is referred to as “labeldependent features” in VW.) Finally, we add the Cartesian product between the two namespaces to the feature set: every shared feature is combined with every translation feature. This setting allows us to train only a single, global model with powerful feature sharing. 
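To make the scoring concrete, the following sketch mimics the structure of Equation (2) with a plain Python dictionary of weights: shared context features (namespace S) are crossed with label-dependent translation features (namespace T), scored with a dot product, and locally normalized over the candidate translations. The feature names and the toy weights are invented for illustration; the real system delegates hashing, quadratic expansion and learning to Vowpal Wabbit, so this is a sketch of the scoring logic, not the actual implementation.

    import math

    def score_candidates(shared_feats, candidates, weights):
        # shared_feats: namespace-S features from the source sentence and the
        #               two previous target words (constant for all candidates
        #               of one source phrase).
        # candidates:   dict mapping candidate translation -> its namespace-T
        #               (label-dependent) features.
        # weights:      dict from feature name to learned weight.
        scores = {}
        for cand, trans_feats in candidates.items():
            # Label-dependent features plus the Cartesian product S x T;
            # purely shared features cancel in the normalization, so they
            # are omitted here.
            feats = list(trans_feats)
            feats += [s + "^" + t for s in shared_feats for t in trans_feats]
            scores[cand] = sum(weights.get(f, 0.0) for f in feats)
        # Local normalization over the candidate set, as in Equation (2).
        z = sum(math.exp(v) for v in scores.values())
        return {c: math.exp(v) / z for c, v in scores.items()}

    # Toy example: disambiguating "shooting" given the context word "film".
    shared = ["src_ctx_l:film", "tgt_prev_t:NN"]
    cands = {"natáčení": ["t_l:natáčení"], "střelba": ["t_l:střelba"]}
    w = {"src_ctx_l:film^t_l:natáčení": 1.2, "t_l:střelba": 0.3}
    print(score_candidates(shared, cands, w))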
For example, thanks to the label-dependent format, we can decompose both the source phrase and the target phrase into words and have features such as s cat t koˇcka which capture phrase-internal word translations. Predictions for rare phrase pairs are then more robust thanks to the rich statistics collected for these word-level feature pairs. 2.3 Extraction of Training Examples Discriminative models in MT are typically trained by creating one training instance per extracted phrase from the entire training data. The target side of the extracted phrase is a positive label, and all other phrases observed aligned to the extracted phrase (anywhere in the training data) are the negative labels. We train our model in a similar fashion: for each sentence in the parallel training data, we look at all possible phrasal segmentations. Then for each source span, we create a training example. We obtain the set of possible translations GEN( ¯f) from the phrase table. Because we do not have actual classes, each translation is defined by its labeldependent features and we associate a loss with it: 0 loss for the correct translation and 1 for all others. Because we train both our model and the standard phrase table on the same dataset, we use leaving-one-out in the classifier training to avoid 1706 Feature Type Configurations Czech German Polish, Romanian Source Indicator f, l, l+t, t f, l, l+t, t l, t Source Internal f, f+a, f+p, l, l+t, t, a+p f, f+a, f+p, l, l+t, t, a+p l, l+a, l+p, t, a+p Source Context f (-3,3), l (-3,3), t (-5,5) f (-3,3), l (-3,3), t (-5,5) l (-3,3), t (-5,5) Target Context f (2), l (2), t (2), l+t (2) f (2), l (2), t (2), l+t (2) l (2), t (2) Bilingual Context — l+t/l+t (2) l+t/l+t (2) Target Indicator f, l, t f, l, t l, t Target Internal f, l, l+t, t f, l, l+t, t l, t Table 1: List of used feature templates. Letter abbreviations refer to word factors: f (form), l (lemma), t (morphological tag), a (analytical function), p (lemma of dependency parent). Numbers in parentheses indicate context size. over-fitting. We look at phrase counts and cooccurrence counts in the training data, we subtract one from the number of occurrences for the current source phrase, target phrase and the phrase pair. If the count goes to zero, we skip the training example. Without this technique, the classifier might learn to simply trust very long phrase pairs which were extracted from the same training sentence. For target-side context features, we simply use the true (gold) target context. This leads to training which is similar to language model estimation; this model is somewhat similar to the neural joint model for MT (Devlin et al., 2014), but in our case implemented using a linear (maximumentropy-like) model. 2.4 Training We use Vowpal Wabbit in the --csoaa ldf mc setting which reduces our multi-class problem to one-against-all binary classification. We use the logistic loss as our objective. We experimented with various settings of L2 regularization but were not able to get an improvement over not using regularization at all. We train each model with 10 iterations over the data. We evaluate all of our models on a held-out set. We use the same dataset as for MT system tuning because it closely matches the domain of our test set. We evaluate model accuracy after each pass over the training data to detect over-fitting and we select the model with the highest held-out accuracy. 2.5 Feature Set Our feature set requires some linguistic processing of the data. 
We use the factored MT setting (Koehn and Hoang, 2007) and we represent each type of information as an individual factor. On the source side, we use the word surface form, its lemma, morphological tag, analytical function (such as Subj for subjects) and the lemma of the parent node in the dependency parse tree. On the target side, we only use word lemmas and morphological tags. Table 1 lists our feature sets for each language pair. We implemented indicator features for both the source and target side; these are simply concatenations of the words in the current phrase into a single feature. Internal features describe words within the current phrase. Context features are extracted either from a window of a fixed size around the current phrase (on the source side) or from a limited left-hand side context (on the target side). Bilingual context features are concatenations of target-side context words and their sourceside counterparts (according to word alignment); these features are similar to bilingual tokens in bilingual LMs (Niehues et al., 2011). Each of our feature types can be configured to look at any individual factors or their combinations. The features in Table 1 are divided into three sets. The first set contains label-independent (=shared) features which only depend on the source sentence. The second set contains shared features which depend on target-side context; these can only be used when VW is applied during decoding. We use target context size two in all our experiments.2 Finally, the third set contains label-dependent features which describe the currently predicted phrasal translation. 2In preliminary experiments we found that using a single word was less effective and larger context did not bring improvements, possibly because of over-fitting. 1707 Going back to the examples from Figure 1, our model can disambiguate the translation of “shooting” based on the source-context features (either the full form or lemma). For the morphological disambiguation of the translation of “yellowish cat”, the model has access to the morphological tags of the preceding target words which can disambiguate the correct morphological case. We used slightly different subsets of the full feature set for different languages. In particular, we left out surface form features and/or bilingual features in some settings because they decreased performance, presumably due to over-fitting. 3 Efficient Implementation Originally, we assumed that using target-side context features in decoding would be too expensive, considering that we would have to query our model roughly as often as the language model. In preliminary experiments, we therefore focused on n-best list re-ranking. We obtained small gains but all of our results were substantially worse than with the integrated model, so we omit them from the paper. We find that decoding with a feature-rich targetcontext model is in fact feasible. In this section, we describe optimizations at different stages of our pipeline which make training and inference with our model practical. 3.1 Feature Extraction We implemented the code for feature extraction only once; identical code is used at training time and in decoding. At training time, the generated features are written into a file whereas at test time, they are fed directly into the classifier via its library interface. This design decision not only ensures consistency in feature representation but also makes the process of feature extraction efficient. 
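As a toy rendering of a few of the shared feature templates from Table 1, the sketch below builds source indicator, source context and target context features from factored tokens. The factor layout and feature names are invented for illustration and do not match the Moses/VW implementation.

    def extract_shared_features(src_factors, span, tgt_context, window=3):
        # src_factors: list of (form, lemma, tag) tuples for the source sentence.
        # span:        (start, end) indices of the current source phrase.
        # tgt_context: (lemma, tag) tuples of the two previous target words,
        #              most recent first.
        start, end = span
        feats = []
        # Source indicator: concatenation of source lemmas in the phrase.
        feats.append("src_ind_l:" + "_".join(l for _, l, _ in src_factors[start:end]))
        # Source context: lemmas in a fixed window around the phrase.
        for i in range(max(0, start - window), min(len(src_factors), end + window)):
            if i < start or i >= end:
                feats.append("src_ctx_l[%d]:%s" % (i - start, src_factors[i][1]))
        # Target context: lemma and tag of the preceding target words.
        for j, (lemma, tag) in enumerate(tgt_context):
            feats.append("tgt_ctx_l[-%d]:%s" % (j + 1, lemma))
            feats.append("tgt_ctx_t[-%d]:%s" % (j + 1, tag))
        return feats

    src = [("the", "the", "DT"), ("yellowish", "yellowish", "JJ"), ("cat", "cat", "NN")]
    print(extract_shared_features(src, (2, 3), [("nažloutlý", "ADJ"), ("spatřit", "VERB")]))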
In training, we are easily able to use multi-threading (already implemented in Moses) and because the processing of training data is a trivially parallel task, we can also use distributed computation and run separate instances of (multi-threaded) Moses on several machines. This enables us to easily produce training files from millions of parallel sentences within a short time. 3.2 Model Training VW is a very fast classifier by itself, however for very large data, its training can be further sped up by using parallelization. We take advantage of its implementation of the AllReduce scheme which we utilize in a grid engine environment. We shuffle and shard the data and then assign each shard to a worker job. With AllReduce, there is a master job which synchronizes the learned weight vector with all workers. We have compared this approach with the standard single-threaded, single-process training and found that we obtain identical model accuracy. We usually use around 10-20 training jobs. This way, we can process our large training files quickly and train the full model (using multiple passes over the data) within hours; effectively, neither feature extraction nor model training become a significant bottleneck in the full MT system training pipeline. 3.3 Decoding In phrase-based decoding, translation is generated from left to right. At each step, a partial translation (initially empty) is extended by translating a previously uncovered part of the source sentence. There are typically many ways to translate each source span, which we refer to as translation options. The decoding process gradually extends the generated partial translations until the whole source sentence is covered; the final translation is then the full translation hypothesis with the highest model score. Various pruning strategies are applied to make decoding tractable. Evaluating a feature-rich classifier during decoding is a computationally expensive operation. Because the features in our model depend on target-side context, the feature function which computes the classifier score cannot evaluate the translation options in isolation (independently of the partial translation). Instead, similarly to a language model, it needs to look at previously generated words. This also entails maintaining a state which captures the required context information. A naive integration of the classifier would simply generate all source-context features, all targetcontext features and all features describing the translation option each time a partial hypothesis is evaluated. This is a computationally very expensive approach. We instead propose several technical solutions 1708 which make decoding reasonably fast. Decoding a single sentence with the naive approach takes 13.7 seconds on average. With our optimization, this average time is reduced to 2.9 seconds, i.e. almost by 80 per cent. The baseline system produces a translation in 0.8 seconds on average. Separation of source-context and targetcontext evaluation. Because we have a linear model, the final score is simply the dot product between a weight vector and a (sparse) feature vector. It is therefore trivial to separate it into two components: one that only contains features which depend on the source context and the other with target context features. We can pre-compute the source-context part of the score before decoding (once we have all translation options for the given sentence). 
We cache these partial scores and when the translation option is evaluated, we add the partial score of the target-context features to arrive at the final classifier score. Caching of feature hashes. VW uses feature hashing internally and it is possible to obtain the hash of any feature that we use. When we encounter a previously unseen target context (=state) during decoding, we store the hashes of extracted features in a cache. Therefore for each context, we only run the expensive feature extraction once. Similarly, we pre-compute feature hash vectors for all translation options. Caching of final results. Our classifier locally normalizes the scores so that the probabilities of translations for a given span sum to one. This cannot be done without evaluating all translation options for the span at the same time. Therefore, when we get a translation option to be scored, we fetch all translation options for the given source span and evaluate all of them. We then normalize the scores and add them to a cache of final results. When the other translation options come up, their scores are simply fetched from the cache. This can also further save computation when we get into a previously seen state (from the point of view of our classifier) and we evaluate the same set of translation options in that state; we will simply find the result in cache in such cases. When we combine all of these optimizations, we arrive at the query algorithm shown in Figure 2. 4 Experimental Evaluation We run the main set of experiments on English to Czech translation. To verify that our method is function EVALUATE(t, s) span = t.getSourceSpan() if not resultCache.has(span, s) then scores = () if not stateCache.has(s) then stateCache[s] = CtxFeatures(s) end if for all t′ ←span.tOpts() do srcScore = srcScoreCache[t′] c.addFeatures(stateCache[s]) c.addFeatures(translationCache[t′]) tgtScore = c.predict() scores[t′] = srcScore + tgtScore end for normalize(scores) resultCache[span, s] = scores end if return resultCache[span, s][t] end function Figure 2: Algorithm for obtaining classifier predictions during decoding. The variable t stands for the current translation, s is the current state and c is an instance of the classifier. applicable to other language pairs, we also present experiments in English to German, Polish, and Romanian. In all experiments, we use Treex (Popel and ˇZabokrtsk´y, 2010) to lemmatize and tag the source data and also to obtain dependency parses of all English sentences. 4.1 English-Czech Translation As parallel training data, we use (subsets of) the CzEng 1.0 corpus (Bojar et al., 2012). For tuning, we use the WMT13 test set (Bojar et al., 2013) and we evaluate the systems on the WMT14 test set (Bojar et al., 2014). We lemmatize and tag the Czech data using Morphodita (Strakov´a et al., 2014). Our baseline system is a standard phrase-based Moses setup. The phrase table in both cases is factored and outputs also lemmas and morphological tags. We train a 5-gram LM on the target side of parallel data. We evaluate three settings in our experiments: • baseline – vanilla phrase-based system, • +source – our classifier with source-context features only, 1709 • +target – our classifier with both sourcecontext and target-context features. For each of these settings, we vary the size of the training data for our classifier, the phrase table and the LM. 
We experiment with three different sizes: small (200 thousand sentence pairs), medium (5 million sentence pairs), and full (the whole CzEng corpus, over 14.8 million sentence pairs). For each setting, we run system weight optimization (tuning) using minimum error rate training (Och, 2003) five times and report the average BLEU score. We use MultEval (Clark et al., 2011) to compare the systems and to determine whether the differences in results are statistically significant. We always compare the baseline with +source and +source with +target. Table 2 shows the obtained results. Statistically significant differences (α=0.01) are marked in bold. The source-context model does not help in the small data setting but brings a substantial improvement of 0.7-0.8 BLEU points for the medium and full data settings, which is an encouraging result. Target-side context information allows our model to push the translation quality further: even for the small data setting, it brings a substantial improvement of 0.5 BLEU points and the gain remains significant as the data size increases. Even in the full data setting, target-side features improve the score by roughly 0.2 BLEU points. Our results demonstrate that feature-rich models scale to large data size both in terms of technical feasibility and of translation quality improvements. Target side information seems consistently beneficial, adding further 0.2-0.5 BLEU points on top of the source-context model. data size small medium full baseline 10.7 15.2 16.7 +source 10.7 16.0 17.3 +target 11.2 16.4 17.5 Table 2: BLEU scores obtained on the WMT14 test set. We report the performance of the baseline, the source-context model and the full model. Intrinsic Evaluation. For completeness, we report intrinsic evaluation results. We evaluate the classifier on a held-out set (WMT13 test set) by extracting all phrase pairs from the test input aligned with the test reference (similarly as we would in training) and scoring each phrase pair (along with other possible translations of the source phrase) with our classifier. An instance is classified correctly if the true translation obtains the highest score by our model. A baseline which always chooses the most frequent phrasal translation obtains accuracy of 51.5. For the sourcecontext model, the held-out accuracy was 66.3, while the target context model achieved accuracy of 74.8. Note that this high difference is somewhat misleading because in this setting, the targetcontext model has access to the true target context (i.e., it is cheating). 4.2 Additional Language Pairs We experiment with translation from English into German, Polish, and Romanian. Our English-German system is trained on the data available for the WMT14 translation task: Europarl (Koehn, 2005) and the Common Crawl corpus,3 roughly 4.3 million sentence pairs altogether. We tune the system on the WMT13 test set and we test on the WMT14 set. We use TreeTagger (Schmid, 1994) to lemmatize and tag the German data. English-Polish has not been included in WMT shared tasks so far, but was present as a language pair for several IWSLT editions which concentrate on TED talk translation. Full test sets are only available for 2010, 2011, and 2012. The references for 2013 and 2014 were not made public. We use the development set and test set from 2010 as development data for parameter tuning. The remaining two test sets (2011, 2012) are our test data. We train on the concatenation of Europarl and WIT3 (Cettolo et al., 2012), ca. 750 thousand sentence pairs. 
The Polish half has been tagged using WCRFT (Radziszewski, 2013) which produces full morphological tags compatible with the NKJP tagset (Przepi´orkowski, 2009). English-Romanian was added in WMT16. We train our system using the available parallel data – Europarl and SETIMES2 (Tiedemann, 2009), roughly 600 thousand sentence pairs. We tune the English-Romanian system on the official development set and we test on the WMT16 test set. We use the online tagger by Tufis et al. (2008) to preprocess the data. Table 3 shows the obtained results. Similarly to English-Czech experiments, BLEU scores are av3http://commoncrawl.org/ 1710 input: the most intensive mining took place there from 1953 to 1962 . baseline: nejv´ıce intenzivn´ı tˇeˇzba doˇslo tam z roku 1953 , aby 1962 . the most intensive miningnom there occurred there from 1953 , in order to 1962 . +source: nejv´ıce intenzivn´ı tˇeˇzby m´ısto tam z roku 1953 do roku 1962 . the most intensive mininggen place there from year 1953 until year 1962 . +target: nejv´ıce intenzivn´ı tˇeˇzba prob´ıhala od roku 1953 do roku 1962 . the most intensive miningnom occurred from year 1953 until year 1962 . Figure 3: An example sentence from the test set. Each translation has a corresponding gloss in italics. Errors are marked in bold. language de pl (2011) pl (2012) ro baseline 15.7 12.8 10.4 19.6 +target 16.2 13.4 11.1 20.2 Table 3: BLEU scores of the baseline and of the full model for English to German, Polish, and Romanian. eraged over 5 independent optimization runs. Our system outperforms the baseline by 0.5-0.7 BLEU points in all cases, showing that the method is applicable to other languages with rich morphology. 5 Analysis We manually analyze the outputs of EnglishCzech systems. Figure 3 shows an example sentence from the WMT14 test set translated by all the system variants. The baseline system makes an error in verb valency; the Czech verb “doˇslo” could be used but this verb already has an (implicit) subject and the translation of “mining” (“tˇeˇzba”) would have to be in a different case and at a different position in the sentence. The second error is more interesting, however: the baseline system fails to correctly identify the word sense of the particle “to” and translates it in the sense of purpose, as in “in order to”. The source-context model takes the context (span of years) into consideration and correctly disambiguates the translation of “to”, choosing the temporal meaning. It still fails to translate the main verb correctly, though. Only the full model with target-context information is able to also correctly translate the verb and inflect its arguments according to their roles in the valency frame. The translation produced by this final system in this case is almost flawless. In order to verify that the automatically measured results correspond to visible improvements in translation quality, we carried out two annotation experiments. We took a random sample of 104 sentences from the test set and blindly ranked two competing translations (the selection of sentences was identical for both experiments). In the first experiment, we compared the baseline system with +source. In the other experiment, we compared the baseline with +target. The instructions for annotation were simply to compare overall translation quality; we did not ask the annotator to look for any specific phenomena. 
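A minimal sketch of how such blind pairwise judgments could be aggregated, and optionally checked with an exact sign test over the non-tied cases, is given below. The paper itself reports only raw win/tie/loss counts, so the significance check is an added illustration rather than part of the original protocol, and the judgment labels are placeholders.

    from math import comb

    def sign_test(wins_a, wins_b):
        # Two-sided exact sign test over the non-tied judgments,
        # using the "double the smaller tail" convention.
        n = wins_a + wins_b
        k = min(wins_a, wins_b)
        tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
        return min(1.0, 2 * tail)

    def aggregate(rankings):
        # rankings: one label per sentence: "A", "B" or "tie".
        wins_a = rankings.count("A")
        wins_b = rankings.count("B")
        ties = rankings.count("tie")
        return wins_a, wins_b, ties, sign_test(wins_a, wins_b)

    # Hypothetical judgments for 104 sentences (labels are placeholders).
    judgments = ["A"] * 30 + ["B"] * 20 + ["tie"] * 54
    print(aggregate(judgments))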
In terms of automatic measures, our selection has similar characteristics as the full test set: BLEU scores obtained on our sample are 15.08, 16.22 and 16.53 for the baseline, +source and +target respectively. In the first case, the annotator marked 52 translations as equal in quality, 26 translations produced by +source were marked as better and in the remaining 26 cases, the baseline won the ranking. Even though there is a difference in BLEU, human annotation does not confirm this measurement, ranking both systems equally. In the second experiment, 52 translations were again marked as equal. In 34 cases, +target produced a better translation while in 18 cases, the baseline output won. The difference between the baseline and +target suggests that the targetcontext model may provide information which is useful for translation quality as perceived by humans. Our overall impression from looking at the system outputs was that both the source-context and target-context model tend to fix many morphosyntactic errors. Interestingly, we do not observe as many improvements in the word/phrase sense disambiguation, though the source context does help semantics in some sentences. The targetcontext model tends to preserve the overall agreement and coherence better than the system with a source-context model only. We list several such examples in Figure 4. Each of them is fully cor1711 input: destruction of the equipment means that Syria can no longer produce new chemical weapons . +source: zniˇcen´ım zaˇr´ızen´ı znamen´a , ˇze S´yrie jiˇz nem˚uˇze vytv´aˇret nov´e chemick´e zbranˇe . destruction ofinstr equipment means , that Syria already cannot produce new chemical weapons . +target: zniˇcen´ı zaˇr´ızen´ı znamen´a , ˇze S´yrie jiˇz nem˚uˇze vytv´aˇret nov´e chemick´e zbranˇe . destruction ofnom equipment means , that Syria already cannot produce new chemical weapons . input: nothing like that existed , and despite that we knew far more about each other . +source: nic takov´eho neexistovalo , a pˇresto jsme vˇedˇeli daleko v´ıc o jeden na druh´eho . nothing like that existed , and despite that we knew far more about onenom on other . +target: nic takov´eho neexistovalo , a pˇresto jsme vˇedˇeli daleko v´ıc o sobˇe navz´ajem . nothing like that existed , and despite that we knew far more about each other . input: the authors have been inspired by their neighbours . +source: autoˇri byli inspirov´ani sv´ych soused˚u . the authors have been inspired theirgen neighboursgen . +target: autoˇri byli inspirov´ani sv´ymi sousedy . the authors have been inspired theirinstr neighboursinstr . Figure 4: Example sentences from the test set showing improvements in morphological coherence. Each translation has a corresponding gloss in italics. Errors are marked in bold. rected by the target-context model, producing an accurate translation of the input. 6 Related Work Discriminative models in MT have been proposed before. Carpuat and Wu (2007) trained a maximum entropy classifier for each source phrase type which used source context information to disambiguate its translations. The models did not capture target-side information and they were independent; no parameters were shared between classifiers for different phrases. They used a strong feature set originally developed for word sense disambiguation. Gimpel and Smith (2008) also used wider source-context information but did not train a classifier; instead, the features were included directly in the log-linear model of the decoder. Mauser et al. 
(2009) introduced the “discriminative word lexicon” and trained a binary classifier for each target word, using as features only the bag of words (from the whole source sentence). Training sentences where the target word occurred were used as positive examples, other sentences served as negative examples. Jeong et al. (2010) proposed a discriminative lexicon with a rich feature set tailored to translation into morphologically rich languages; unlike our work, their model only used source-context features. Subotin (2011) included target-side context information in a maximum-entropy model for the prediction of morphology. The work was done within the paradigm of hierarchical PBMT and assumes that cube pruning is used in decoding. Their algorithm was tailored to the specific problem of passing non-local information about morphological agreement required by individual rules (such as explicit rules enforcing subject-verb agreement). Our algorithm only assumes that hypotheses are constructed left to right and provides a general way for including target context information in the classifier, regardless of the type of features. Our implementation is freely available and can be further extended by other researchers in the future. 7 Conclusions We presented a discriminative model for MT which uses both source and target context information. We have shown that such a model can be used directly during decoding in a relatively efficient way. We have shown that this model consistently significantly improves the quality of English-Czech translation over a strong baseline with large training data. We have validated the effectiveness of our model on several additional language pairs. We have provided an analysis showing concrete examples of improved lexical selection and morphological coherence. Our work is available in the main branch of Moses for use by other researchers. Acknowledgements This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements no. 644402 (HimL) and 645452 (QT21), from the European Research Council (ERC) under grant agreement no. 640550, and from the SVV project number 260 333. This work has been using language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). 1712 References Ondˇrej Bojar, Zdenˇek ˇZabokrtsk´y, Ondˇrej Duˇsek, Petra Galuˇsˇc´akov´a, Martin Majliˇs, David Mareˇcek, Jiˇr´ı Marˇs´ık, Michal Nov´ak, Martin Popel, and Aleˇs Tamchyna. 2012. The Joy of Parallelism with CzEng 1.0. In Proc. of LREC, pages 3921–3928. ELRA. Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria, August. Association for Computational Linguistics. Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, MD, USA. Association for Computational Linguistics. Marine Carpuat and Dekai Wu. 2007. 
Improving statistical machine translation using word sense disambiguation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Prague, Czech Republic. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261–268, Trento, Italy, May. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 176–181, Stroudsburg, PA, USA. Association for Computational Linguistics. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 1370–1380. K. Gimpel and N. A. Smith. 2008. Rich Source-Side Context for Statistical Machine Translation. Columbus, Ohio. Minwoo Jeong, Kristina Toutanova, Hisami Suzuki, and Chris Quirk. 2010. A discriminative lexicon model for complex morphology. In The Ninth Conference of the Association for Machine Translation in the Americas. Association for Computational Linguistics, November. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868–876, Prague, Czech Republic, June. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. Arne Mauser, Sasa Hasan, and Hermann Ney. 2009. Extending Statistical Machine Translation with Discriminative and Trigger-Based Lexicon Models. pages 210–218, Suntec, Singapore. Jan Niehues, Teresa Herrmann, Stephan Vogel, and Alex Waibel. 2011. Wider Context by Using Bilingual Language Models in Machine Translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 198–206, Edinburgh, Scotland, July. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of ACL, pages 160–167, Sapporo, Japan. ACL. Martin Popel and Zdenˇek ˇZabokrtsk´y. 2010. TectoMT: Modular NLP Framework. In Hrafn Loftsson, Eirikur R¨ognvaldsson, and Sigrun Helgadottir, editors, IceTAL 2010, volume 6233 of Lecture Notes in Computer Science, pages 293–304. Iceland Centre for Language Technology (ICLT), Springer. Adam Przepi´orkowski. 2009. A comparison of two morphosyntactic tagsets of Polish. In Violetta Koseska-Toszewa, Ludmila Dimitrova, and Roman Roszko, editors, Representing Semantics in Digital Lexicography: Proceedings of MONDILEX Fourth Open Workshop, pages 138–144, Warsaw. Adam Radziszewski. 2013. A tiered CRF tagger for Polish. 
In Robert Bembenik, Lukasz Skonieczny, Henryk Rybinski, Marzena Kryszkiewicz, and Marek Niezgodka, editors, Intelligent Tools for Building a Scientific Information Platform, volume 467 of Studies in Computational Intelligence, pages 215–230. Springer. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, pages 44–49, Manchester, UK. Jana Strakov´a, Milan Straka, and Jan Hajiˇc. 2014. Open-Source Tools for Morphology, Lemmatization, POS Tagging and Named Entity Recognition. 1713 In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 13–18, Baltimore, Maryland, June. Association for Computational Linguistics. Michael Subotin. 2011. An exponential translation model for target language morphology. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 230–238. Aleˇs Tamchyna, Fabienne Braune, Alexander Fraser, Marine Carpuat, Hal Daum´e III, and Chris Quirk. 2014. Integrating a discriminative classifier into phrase-based and hierarchical decoding. The Prague Bulletin of Mathematical Linguistics, 101:29–41. J¨org Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Dan Tufis, Radu Ion, Alexandru Ceausu, and Dan Stefanescu. 2008. Racai’s linguistic web services. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May - 1 June 2008, Marrakech, Morocco. D. Vickrey, L. Biewald, M. Teyssier, and D. Koller. 2005. Word-Sense Disambiguation for Machine Translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Vancouver, Canada, October. 1714
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715–1725, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Machine Translation of Rare Words with Subword Units Rico Sennrich and Barry Haddow and Alexandra Birch School of Informatics, University of Edinburgh {rico.sennrich,a.birch}@ed.ac.uk, [email protected] Abstract Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character ngram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English→German and English→Russian by up to 1.1 and 1.3 BLEU, respectively. 1 Introduction Neural machine translation has recently shown impressive results (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). However, the translation of rare words is an open problem. The vocabulary of neural models is typically limited to 30 000–50 000 words, but translation is an open-vocabulary probThe research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. Samsung R&D Institute Poland. lem, and especially for languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that go below the word level. As an example, consider compounds such as the German Abwasser|behandlungs|anlange ‘sewage water treatment plant’, for which a segmented, variable-length representation is intuitively more appealing than encoding the word as a fixed-length vector. For word-level NMT models, the translation of out-of-vocabulary words has been addressed through a back-off to a dictionary look-up (Jean et al., 2015; Luong et al., 2015b). We note that such techniques make assumptions that often do not hold true in practice. For instance, there is not always a 1-to-1 correspondence between source and target words because of variance in the degree of morphological synthesis between languages, like in our introductory compounding example. Also, word-level models are unable to translate or generate unseen words. Copying unknown words into the target text, as done by (Jean et al., 2015; Luong et al., 2015b), is a reasonable strategy for names, but morphological changes and transliteration is often required, especially if alphabets differ. We investigate NMT models that operate on the level of subword units. Our main goal is to model open-vocabulary translation in the NMT network itself, without requiring a back-off model for rare words. 
In addition to making the translation process simpler, we also find that the subword models achieve better accuracy for the translation of rare words than large-vocabulary models and back-off dictionaries, and are able to productively generate new words that were not seen at training time. Our analysis shows that the neural networks are able to learn compounding and transliteration from subword representations. This paper has two main contributions: • We show that open-vocabulary neural ma1715 chine translation is possible by encoding (rare) words via subword units. We find our architecture simpler and more effective than using large vocabularies and back-off dictionaries (Jean et al., 2015; Luong et al., 2015b). • We adapt byte pair encoding (BPE) (Gage, 1994), a compression algorithm, to the task of word segmentation. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models. 2 Neural Machine Translation We follow the neural machine translation architecture by Bahdanau et al. (2015), which we will briefly summarize here. However, we note that our approach is not specific to this architecture. The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks. The encoder is a bidirectional neural network with gated recurrent units (Cho et al., 2014) that reads an input sequence x = (x1, ..., xm) and calculates a forward sequence of hidden states (−→h 1, ..., −→h m), and a backward sequence (←−h 1, ..., ←−h m). The hidden states −→h j and ←−h j are concatenated to obtain the annotation vector hj. The decoder is a recurrent neural network that predicts a target sequence y = (y1, ..., yn). Each word yi is predicted based on a recurrent hidden state si, the previously predicted word yi−1, and a context vector ci. ci is computed as a weighted sum of the annotations hj. The weight of each annotation hj is computed through an alignment model αij, which models the probability that yi is aligned to xj. The alignment model is a singlelayer feedforward neural network that is learned jointly with the rest of the network through backpropagation. A detailed description can be found in (Bahdanau et al., 2015). Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed. 3 Subword Translation The main motivation behind this paper is that the translation of some words is transparent in that they are translatable by a competent translator even if they are novel to him or her, based on a translation of known subword units such as morphemes or phonemes. Word categories whose translation is potentially transparent include: • named entities. Between languages that share an alphabet, names can often be copied from source to target text. Transcription or transliteration may be required, especially if the alphabets or syllabaries differ. Example: Barack Obama (English; German) Áàðàê Îáàìà (Russian) バラク・オバマ(ba-ra-ku o-ba-ma) (Japanese) • cognates and loanwords. Cognates and loanwords with a common origin can differ in regular ways between languages, so that character-level translation rules are sufficient (Tiedemann, 2012). Example: claustrophobia (English) Klaustrophobie (German) Êëàóñòðîôîáèÿ (Klaustrofobiâ) (Russian) • morphologically complex words. 
Words containing multiple morphemes, for instance formed via compounding, affixation, or inflection, may be translatable by translating the morphemes separately. Example: solar system (English) Sonnensystem (Sonne + System) (German) Naprendszer (Nap + Rendszer) (Hungarian) In an analysis of 100 rare tokens (not among the 50 000 most frequent types) in our German training data1, the majority of tokens are potentially translatable from English through smaller units. We find 56 compounds, 21 names, 6 loanwords with a common origin (emancipate→emanzipieren), 5 cases of transparent affixation (sweetish ‘sweet’ + ‘-ish’ →süßlich ‘süß’ + ‘-lich’), 1 number and 1 computer language identifier. Our hypothesis is that a segmentation of rare words into appropriate subword units is sufficient to allow for the neural translation network to learn transparent translations, and to generalize this knowledge to translate and produce unseen words.2 We provide empirical support for this hy1Primarily parliamentary proceedings and web crawl data. 2Not every segmentation we produce is transparent. While we expect no performance benefit from opaque segmentations, i.e. segmentations where the units cannot be translated independently, our NMT models show robustness towards oversplitting. 1716 pothesis in Sections 4 and 5. First, we discuss different subword representations. 3.1 Related Work For Statistical Machine Translation (SMT), the translation of unknown words has been the subject of intensive research. A large proportion of unknown words are names, which can just be copied into the target text if both languages share an alphabet. If alphabets differ, transliteration is required (Durrani et al., 2014). Character-based translation has also been investigated with phrase-based models, which proved especially successful for closely related languages (Vilar et al., 2007; Tiedemann, 2009; Neubig et al., 2012). The segmentation of morphologically complex words such as compounds is widely used for SMT, and various algorithms for morpheme segmentation have been investigated (Nießen and Ney, 2000; Koehn and Knight, 2003; Virpioja et al., 2007; Stallard et al., 2012). Segmentation algorithms commonly used for phrase-based SMT tend to be conservative in their splitting decisions, whereas we aim for an aggressive segmentation that allows for open-vocabulary translation with a compact network vocabulary, and without having to resort to back-off dictionaries. The best choice of subword units may be taskspecific. For speech recognition, phone-level language models have been used (Bazzi and Glass, 2000). Mikolov et al. (2012) investigate subword language models, and propose to use syllables. For multilingual segmentation tasks, multilingual algorithms have been proposed (Snyder and Barzilay, 2008). We find these intriguing, but inapplicable at test time. Various techniques have been proposed to produce fixed-length continuous word vectors based on characters or morphemes (Luong et al., 2013; Botha and Blunsom, 2014; Ling et al., 2015a; Kim et al., 2015). An effort to apply such techniques to NMT, parallel to ours, has found no significant improvement over word-based approaches (Ling et al., 2015b). One technical difference from our work is that the attention mechanism still operates on the level of words in the model by Ling et al. (2015b), and that the representation of each word is fixed-length. 
We expect that the attention mechanism benefits from our variable-length representation: the network can learn to place attention on different subword units at each step. Recall our introductory example Abwasserbehandlungsanlange, for which a subword segmentation avoids the information bottleneck of a fixed-length representation. Neural machine translation differs from phrasebased methods in that there are strong incentives to minimize the vocabulary size of neural models to increase time and space efficiency, and to allow for translation without back-off models. At the same time, we also want a compact representation of the text itself, since an increase in text length reduces efficiency and increases the distances over which neural models need to pass information. A simple method to manipulate the trade-off between vocabulary size and text size is to use shortlists of unsegmented words, using subword units only for rare words. As an alternative, we propose a segmentation algorithm based on byte pair encoding (BPE), which lets us learn a vocabulary that provides a good compression rate of the text. 3.2 Byte Pair Encoding (BPE) Byte Pair Encoding (BPE) (Gage, 1994) is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. We adapt this algorithm for word segmentation. Instead of merging frequent pairs of bytes, we merge characters or character sequences. Firstly, we initialize the symbol vocabulary with the character vocabulary, and represent each word as a sequence of characters, plus a special end-ofword symbol ‘·’, which allows us to restore the original tokenization after translation. We iteratively count all symbol pairs and replace each occurrence of the most frequent pair (‘A’, ‘B’) with a new symbol ‘AB’. Each merge operation produces a new symbol which represents a character n-gram. Frequent character n-grams (or whole words) are eventually merged into a single symbol, thus BPE requires no shortlist. The final symbol vocabulary size is equal to the size of the initial vocabulary, plus the number of merge operations – the latter is the only hyperparameter of the algorithm. For efficiency, we do not consider pairs that cross word boundaries. The algorithm can thus be run on the dictionary extracted from a text, with each word being weighted by its frequency. A minimal Python implementation is shown in Al1717 Algorithm 1 Learn BPE operations import re, collections def get_stats(vocab): pairs = collections.defaultdict(int) for word, freq in vocab.items(): symbols = word.split() for i in range(len(symbols)-1): pairs[symbols[i],symbols[i+1]] += freq return pairs def merge_vocab(pair, v_in): v_out = {} bigram = re.escape(' '.join(pair)) p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)') for word in v_in: w_out = p.sub(''.join(pair), word) v_out[w_out] = v_in[word] return v_out vocab = {'l o w </w>' : 5, 'l o w e r </w>' : 2, 'n e w e s t </w>':6, 'w i d e s t </w>':3} num_merges = 10 for i in range(num_merges): pairs = get_stats(vocab) best = max(pairs, key=pairs.get) vocab = merge_vocab(best, vocab) print(best) r · → r· l o → lo lo w → low e r· → er· Figure 1: BPE merge operations learned from dictionary {‘low’, ‘lowest’, ‘newer’, ‘wider’}. gorithm 1. In practice, we increase efficiency by indexing all pairs, and updating data structures incrementally. 
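For readability, the learning procedure of Algorithm 1 can be restated as runnable Python, with the loop wrapped in a small helper that returns the ordered list of merge operations instead of printing them. The functions get_stats and merge_vocab follow the listing above; the wrapper learn_bpe and its return format are illustrative additions, not part of the released implementation.

```python
import re
import collections

def get_stats(vocab):
    # Count frequencies of all adjacent symbol pairs, weighted by word frequency.
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, v_in):
    # Replace every occurrence of the pair (as two separate symbols) with the merged symbol.
    v_out = {}
    bigram = re.escape(' '.join(pair))
    p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    for word in v_in:
        v_out[p.sub(''.join(pair), word)] = v_in[word]
    return v_out

def learn_bpe(vocab, num_merges):
    # Illustrative wrapper (our naming): return the merges in the order they were learned.
    merges = []
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
        merges.append(best)
    return merges

vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
print(learn_bpe(vocab, 10))
```

Returning the merges in learning order is what allows the same operations to be replayed, in that order, when segmenting new text at test time.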
The main difference to other compression algorithms, such as Huffman encoding, which have been proposed to produce a variable-length encoding of words for NMT (Chitnis and DeNero, 2015), is that our symbol sequences are still interpretable as subword units, and that the network can generalize to translate and produce new words (unseen at training time) on the basis of these subword units. Figure 1 shows a toy example of learned BPE operations. At test time, we first split words into sequences of characters, then apply the learned operations to merge the characters into larger, known symbols. This is applicable to any word, and allows for open-vocabulary networks with fixed symbol vocabularies.3 In our example, the OOV ‘lower’ would be segmented into ‘low er·’. 3The only symbols that will be unknown at test time are unknown characters, or symbols of which all occurrences in the training text have been merged into larger symbols, like ‘safeguar’, which has all occurrences in our training text merged into ‘safeguard’. We observed no such symbols at test time, but the issue could be easily solved by recursively reversing specific merges until all symbols are known. We evaluate two methods of applying BPE: learning two independent encodings, one for the source, one for the target vocabulary, or learning the encoding on the union of the two vocabularies (which we call joint BPE).4 The former has the advantage of being more compact in terms of text and vocabulary size, and having stronger guarantees that each subword unit has been seen in the training text of the respective language, whereas the latter improves consistency between the source and the target segmentation. If we apply BPE independently, the same name may be segmented differently in the two languages, which makes it harder for the neural models to learn a mapping between the subword units. To increase the consistency between English and Russian segmentation despite the differing alphabets, we transliterate the Russian vocabulary into Latin characters with ISO-9 to learn the joint BPE encoding, then transliterate the BPE merge operations back into Cyrillic to apply them to the Russian training text.5 4 Evaluation We aim to answer the following empirical questions: • Can we improve the translation of rare and unseen words in neural machine translation by representing them via subword units? • Which segmentation into subword units performs best in terms of vocabulary size, text size, and translation quality? We perform experiments on data from the shared translation task of WMT 2015. For English→German, our training set consists of 4.2 million sentence pairs, or approximately 100 million tokens. For English→Russian, the training set consists of 2.6 million sentence pairs, or approximately 50 million tokens. We tokenize and truecase the data with the scripts provided in Moses (Koehn et al., 2007). We use newstest2013 as development set, and report results on newstest2014 and newstest2015. We report results with BLEU (mteval-v13a.pl), and CHRF3 (Popovi´c, 2015), a character n-gram F3 score which was found to correlate well with 4In practice, we simply concatenate the source and target side of the training set to learn joint BPE. 5Since the Russian training text also contains words that use the Latin alphabet, we also apply the Latin BPE operations. 1718 human judgments, especially for translations out of English (Stanojevi´c et al., 2015). 
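As a concrete sketch of the test-time procedure described in Section 3.2 (split a word into characters, then apply the learned operations to merge them into known symbols), the following function replays a merge list on a single word. The name apply_bpe, the greedy left-to-right replay, and the hand-picked merge list in the example are illustrative assumptions rather than the reference implementation; the actual learned merge order depends on the training vocabulary.

```python
def apply_bpe(word, merges, end_of_word='</w>'):
    # Split the word into characters plus the end-of-word marker,
    # then replay the learned merge operations in the order they were learned.
    symbols = list(word) + [end_of_word]
    for left, right in merges:
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == left and symbols[i + 1] == right:
                merged.append(left + right)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols

# Hand-picked merges for illustration; with them the unseen word 'lower'
# is segmented into ['low', 'er</w>'], mirroring the 'low er·' example above.
merges = [('l', 'o'), ('lo', 'w'), ('e', 'r'), ('er', '</w>')]
print(apply_bpe('lower', merges))
```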
Since our main claim is concerned with the translation of rare and unseen words, we report separate statistics for these. We measure these through unigram F1, which we calculate as the harmonic mean of clipped unigram precision and recall.6 We perform all experiments with Groundhog7 (Bahdanau et al., 2015). We generally follow settings by previous work (Bahdanau et al., 2015; Jean et al., 2015). All networks have a hidden layer size of 1000, and an embedding layer size of 620. Following Jean et al. (2015), we only keep a shortlist of τ = 30000 words in memory. During training, we use Adadelta (Zeiler, 2012), a minibatch size of 80, and reshuffle the training set between epochs. We train a network for approximately 7 days, then take the last 4 saved models (models being saved every 12 hours), and continue training each with a fixed embedding layer (as suggested by (Jean et al., 2015)) for 12 hours. We perform two independent training runs for each models, once with cut-off for gradient clipping (Pascanu et al., 2013) of 5.0, once with a cut-off of 1.0 – the latter produced better single models for most settings. We report results of the system that performed best on our development set (newstest2013), and of an ensemble of all 8 models. We use a beam size of 12 for beam search, with probabilities normalized by sentence length. We use a bilingual dictionary based on fast-align (Dyer et al., 2013). For our baseline, this serves as back-off dictionary for rare words. We also use the dictionary to speed up translation for all experiments, only performing the softmax over a filtered list of candidate translations (like Jean et al. (2015), we use K = 30000; K′ = 10). 4.1 Subword statistics Apart from translation quality, which we will verify empirically, our main objective is to represent an open vocabulary through a compact fixed-size subword vocabulary, and allow for efficient training and decoding.8 Statistics for different segmentations of the Ger6Clipped unigram precision is essentially 1-gram BLEU without brevity penalty. 7github.com/sebastien-j/LV_groundhog 8The time complexity of encoder-decoder architectures is at least linear to sequence length, and oversplitting harms efficiency. man side of the parallel data are shown in Table 1. A simple baseline is the segmentation of words into character n-grams.9 Character n-grams allow for different trade-offs between sequence length (# tokens) and vocabulary size (# types), depending on the choice of n. The increase in sequence length is substantial; one way to reduce sequence length is to leave a shortlist of the k most frequent word types unsegmented. Only the unigram representation is truly open-vocabulary. However, the unigram representation performed poorly in preliminary experiments, and we report translation results with a bigram representation, which is empirically better, but unable to produce some tokens in the test set with the training set vocabulary. We report statistics for several word segmentation techniques that have proven useful in previous SMT research, including frequency-based compound splitting (Koehn and Knight, 2003), rulebased hyphenation (Liang, 1983), and Morfessor (Creutz and Lagus, 2002). We find that they only moderately reduce vocabulary size, and do not solve the unknown word problem, and we thus find them unsuitable for our goal of open-vocabulary translation without back-off dictionary. 
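To make statistics of the kind reported in Table 1 easier to interpret, the sketch below computes the same three quantities (# tokens, # types, # UNK) for a toy corpus under a character-bigram segmentation with a shortlist. The helper names and the toy data are our own illustration; the real statistics are of course computed on the full German side of the parallel data.

```python
from collections import Counter

def char_bigram_segment(word, shortlist):
    # Frequent words from the shortlist stay unsegmented; the rest are split into
    # character bigrams, with the final unit marked as word-final (cf. footnote 9).
    if word in shortlist:
        return [word]
    units = [word[i:i + 2] for i in range(0, len(word), 2)]
    units[-1] += '</w>'
    return units

def corpus_stats(train_tokens, test_tokens, segment):
    # '# tokens' and '# types' over the segmented training data, '# UNK' over the test data.
    train_symbols = Counter(s for w in train_tokens for s in segment(w))
    unknown = sum(1 for w in test_tokens for s in segment(w) if s not in train_symbols)
    return {'# tokens': sum(train_symbols.values()),
            '# types': len(train_symbols),
            '# UNK': unknown}

train = 'the wide river and the widest lake'.split()
test = 'the wider river'.split()
shortlist = {'the'}   # the paper's experiments use a shortlist of the 50 000 most frequent words
print(corpus_stats(train, test, lambda w: char_bigram_segment(w, shortlist)))
```

Swapping in a different segment function reproduces the other rows of the table in the same way.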
BPE meets our goal of being open-vocabulary, and the learned merge operations can be applied to the test set to obtain a segmentation with no unknown symbols.10 Its main difference from the character-level model is that the more compact representation of BPE allows for shorter sequences, and that the attention model operates on variable-length units.11 Table 1 shows BPE with 59 500 merge operations, and joint BPE with 89 500 operations. In practice, we did not include infrequent subword units in the NMT network vocabulary, since there is noise in the subword symbol sets, e.g. because of characters from foreign alphabets. Hence, our network vocabularies in Table 2 are typically slightly smaller than the number of types in Table 1. 9Our character n-grams do not cross word boundaries. We mark whether a subword is word-final or not with a special character, which allows us to restore the original tokenization. 10Joint BPE can produce segments that are unknown because they only occur in the English training text, but these are rare (0.05% of test tokens). 11We highlighted the limitations of word-level attention in section 3.1. At the other end of the spectrum, the character level is suboptimal for alignment (Tiedemann, 2009). 1719 vocabulary BLEU CHRF3 unigram F1 (%) name segmentation shortlist source target single ens-8 single ens-8 all rare OOV syntax-based (Sennrich and Haddow, 2015) 24.4 55.3 - 59.1 46.0 37.7 WUnk - 300 000 500 000 20.6 22.8 47.2 48.9 56.7 20.4 0.0 WDict - 300 000 500 000 22.0 24.2 50.5 52.4 58.1 36.8 36.8 C2-50k char-bigram 50 000 60 000 60 000 22.8 25.3 51.9 53.5 58.4 40.5 30.9 BPE-60k BPE 60 000 60 000 21.5 24.5 52.0 53.9 58.4 40.9 29.3 BPE-J90k BPE (joint) 90 000 90 000 22.8 24.7 51.7 54.1 58.5 41.8 33.6 Table 2: English→German translation performance (BLEU, CHRF3 and unigram F1) on newstest2015. Ens-8: ensemble of 8 models. Best NMT system in bold. Unigram F1 (with ensembles) is computed for all words (n = 44085), rare words (not among top 50 000 in training set; n = 2900), and OOVs (not in training set; n = 1168). segmentation # tokens # types # UNK none 100 m 1 750 000 1079 characters 550 m 3000 0 character bigrams 306 m 20 000 34 character trigrams 214 m 120 000 59 compound splitting△ 102 m 1 100 000 643 morfessor* 109 m 544 000 237 hyphenation⋄ 186 m 404 000 230 BPE 112 m 63 000 0 BPE (joint) 111 m 82 000 32 character bigrams 129 m 69 000 34 (shortlist: 50 000) Table 1: Corpus statistics for German training corpus with different word segmentation techniques. #UNK: number of unknown tokens in newstest2013. △: (Koehn and Knight, 2003); *: (Creutz and Lagus, 2002); ⋄: (Liang, 1983). 4.2 Translation experiments English→German translation results are shown in Table 2; English→Russian results in Table 3. Our baseline WDict is a word-level model with a back-off dictionary. It differs from WUnk in that the latter uses no back-off dictionary, and just represents out-of-vocabulary words as UNK12. The back-off dictionary improves unigram F1 for rare and unseen words, although the improvement is smaller for English→Russian, since the back-off dictionary is incapable of transliterating names. All subword systems operate without a back-off dictionary. We first focus on unigram F1, where all systems improve over the baseline, especially for rare words (36.8%→41.8% for EN→DE; 26.5%→29.7% for EN→RU). For OOVs, the baseline strategy of copying unknown words works well for English→German. However, when alphabets differ, like in English→Russian, the subword models do much better. 
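Since the comparison above leans heavily on unigram F1, a minimal sketch of the metric as described in Section 4 and footnote 6 (harmonic mean of clipped unigram precision and recall, i.e. 1-gram BLEU-style counting without brevity penalty) is given below. The function is our own reading of that description, not the authors' evaluation script, and it omits the restriction to word categories such as rare words and OOVs.

```python
from collections import Counter

def unigram_f1(hypothesis, reference):
    # Clipped matching: each token type is credited at most
    # min(count in hypothesis, count in reference).
    hyp, ref = Counter(hypothesis), Counter(reference)
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

hyp = 'ein neues Subwort-Modell wird vorgestellt'.split()
ref = 'ein neuartiges Subwort-Modell wird vorgestellt'.split()
print(round(unigram_f1(hyp, ref), 3))   # 4 of 5 tokens match on both sides -> 0.8
```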
12We use UNK for words that are outside the model vocabulary, and OOV for those that do not occur in the training text. Unigram F1 scores indicate that learning the BPE symbols on the vocabulary union (BPEJ90k) is more effective than learning them separately (BPE-60k), and more effective than using character bigrams with a shortlist of 50 000 unsegmented words (C2-50k), but all reported subword segmentations are viable choices and outperform the back-off dictionary baseline. Our subword representations cause big improvements in the translation of rare and unseen words, but these only constitute 9-11% of the test sets. Since rare words tend to carry central information in a sentence, we suspect that BLEU and CHRF3 underestimate their effect on translation quality. Still, we also see improvements over the baseline in total unigram F1, as well as BLEU and CHRF3, and the subword ensembles outperform the WDict baseline by 0.3–1.3 BLEU and 0.6–2 CHRF3. There is some inconsistency between BLEU and CHRF3, which we attribute to the fact that BLEU has a precision bias, and CHRF3 a recall bias. For English→German, we observe the best BLEU score of 25.3 with C2-50k, but the best CHRF3 score of 54.1 with BPE-J90k. For comparison to the (to our knowledge) best non-neural MT system on this data set, we report syntaxbased SMT results (Sennrich and Haddow, 2015). We observe that our best systems outperform the syntax-based system in terms of BLEU, but not in terms of CHRF3. Regarding other neural systems, Luong et al. (2015a) report a BLEU score of 25.9 on newstest2015, but we note that they use an ensemble of 8 independently trained models, and also report strong improvements from applying dropout, which we did not use. We are confident that our improvements to the translation of rare words are orthogonal to improvements achievable through other improvements in the network archi1720 tecture, training algorithm, or better ensembles. For English→Russian, the state of the art is the phrase-based system by Haddow et al. (2015). It outperforms our WDict baseline by 1.5 BLEU. The subword models are a step towards closing this gap, and BPE-J90k yields an improvement of 1.3 BLEU, and 2.0 CHRF3, over WDict. As a further comment on our translation results, we want to emphasize that performance variability is still an open problem with NMT. On our development set, we observe differences of up to 1 BLEU between different models. For single systems, we report the results of the model that performs best on dev (out of 8), which has a stabilizing effect, but how to control for randomness deserves further attention in future research. 5 Analysis 5.1 Unigram accuracy Our main claims are that the translation of rare and unknown words is poor in word-level NMT models, and that subword models improve the translation of these word types. To further illustrate the effect of different subword segmentations on the translation of rare and unseen words, we plot target-side words sorted by their frequency in the training set.13 To analyze the effect of vocabulary size, we also include the system C2-3/500k, which is a system with the same vocabulary size as the WDict baseline, and character bigrams to represent unseen words. Figure 2 shows results for the English–German ensemble systems on newstest2015. Unigram F1 of all systems tends to decrease for lowerfrequency words. The baseline system has a spike in F1 for OOVs, i.e. words that do not occur in the training text. 
This is because a high proportion of OOVs are names, for which a copy from the source to the target text is a good strategy for English→German. The systems with a target vocabulary of 500 000 words mostly differ in how well they translate words with rank > 500 000. A back-off dictionary is an obvious improvement over producing UNK, but the subword system C2-3/500k achieves better performance. Note that all OOVs that the backoff dictionary produces are words that are copied from the source, usually names, while the subword 13We perform binning of words with the same training set frequency, and apply bezier smoothing to the graph. systems can productively form new words such as compounds. For the 50 000 most frequent words, the representation is the same for all neural networks, and all neural networks achieve comparable unigram F1 for this category. For the interval between frequency rank 50 000 and 500 000, the comparison between C2-3/500k and C2-50k unveils an interesting difference. The two systems only differ in the size of the shortlist, with C2-3/500k representing words in this interval as single units, and C250k via subword units. We find that the performance of C2-3/500k degrades heavily up to frequency rank 500 000, at which point the model switches to a subword representation and performance recovers. The performance of C2-50k remains more stable. We attribute this to the fact that subword units are less sparse than words. In our training set, the frequency rank 50 000 corresponds to a frequency of 60 in the training data; the frequency rank 500 000 to a frequency of 2. Because subword representations are less sparse, reducing the size of the network vocabulary, and representing more words via subword units, can lead to better performance. The F1 numbers hide some qualitative differences between systems. For English→German, WDict produces few OOVs (26.5% recall), but with high precision (60.6%) , whereas the subword systems achieve higher recall, but lower precision. We note that the character bigram model C2-50k produces the most OOV words, and achieves relatively low precision of 29.1% for this category. However, it outperforms the back-off dictionary in recall (33.0%). BPE-60k, which suffers from transliteration (or copy) errors due to segmentation inconsistencies, obtains a slightly better precision (32.4%), but a worse recall (26.6%). In contrast to BPE-60k, the joint BPE encoding of BPEJ90k improves both precision (38.6%) and recall (29.8%). For English→Russian, unknown names can only rarely be copied, and usually require transliteration. Consequently, the WDict baseline performs more poorly for OOVs (9.2% precision; 5.2% recall), and the subword models improve both precision and recall (21.9% precision and 15.6% recall for BPE-J90k). The full unigram F1 plot is shown in Figure 3. 1721 vocabulary BLEU CHRF3 unigram F1 (%) name segmentation shortlist source target single ens-8 single ens-8 all rare OOV phrase-based (Haddow et al., 2015) 24.3 53.8 - 56.0 31.3 16.5 WUnk - 300 000 500 000 18.8 22.4 46.5 49.9 54.2 25.2 0.0 WDict - 300 000 500 000 19.1 22.8 47.5 51.0 54.8 26.5 6.6 C2-50k char-bigram 50 000 60 000 60 000 20.9 24.1 49.0 51.6 55.2 27.8 17.4 BPE-60k BPE 60 000 60 000 20.5 23.6 49.8 52.7 55.3 29.7 15.6 BPE-J90k BPE (joint) 90 000 100 000 20.4 24.1 49.7 53.0 55.8 29.7 18.3 Table 3: English→Russian translation performance (BLEU, CHRF3 and unigram F1) on newstest2015. Ens-8: ensemble of 8 models. Best NMT system in bold. 
Unigram F1 (with ensembles) is computed for all words (n = 55654), rare words (not among top 50 000 in training set; n = 5442), and OOVs (not in training set; n = 851). 100 101 102 103 104 105 106 0 0.2 0.4 0.6 0.8 1 50 000 500 000 training set frequency rank unigram F1 BPE-J90k C2-50k C2-300/500k WDict WUnk Figure 2: English→German unigram F1 on newstest2015 plotted by training set frequency rank for different NMT systems. 100 101 102 103 104 105 106 0 0.2 0.4 0.6 0.8 1 50 000 500 000 training set frequency rank unigram F1 BPE-J90k C2-50k WDict WUnk Figure 3: English→Russian unigram F1 on newstest2015 plotted by training set frequency rank for different NMT systems. 5.2 Manual Analysis Table 4 shows two translation examples for the translation direction English→German, Table 5 for English→Russian. The baseline system fails for all of the examples, either by deleting content (health), or by copying source words that should be translated or transliterated. The subword translations of health research institutes show that the subword systems are capable of learning translations when oversplitting (research→Fo|rs|ch|un|g), or when the segmentation does not match morpheme boundaries: the segmentation Forschungs|instituten would be linguistically more plausible, and simpler to align to the English research institutes, than the segmentation Forsch|ungsinstitu|ten in the BPE-60k system, but still, a correct translation is produced. If the systems have failed to learn a translation due to data sparseness, like for asinine, which should be translated as dumm, we see translations that are wrong, but could be plausible for (partial) loanwords (asinine Situation→Asinin-Situation). The English→Russian examples show that the subword systems are capable of transliteration. However, transliteration errors do occur, either due to ambiguous transliterations, or because of non-consistent segmentations between source and target text which make it hard for the system to learn a transliteration mapping. Note that the BPE-60k system encodes Mirzayeva inconsistently for the two language pairs (Mirz|ayeva→Ìèð|çà|åâà Mir|za|eva). This example is still translated correctly, but we observe spurious insertions and deletions of characters in the BPE-60k system. An example is the transliteration of rakfisk, where a ï is inserted and a ê is deleted. We trace this error back to translation pairs in the training data with inconsistent segmentations, such as (p|rak|ri|ti→ïðà|êðèò|è 1722 system sentence source health research institutes reference Gesundheitsforschungsinstitute WDict Forschungsinstitute C2-50k Fo|rs|ch|un|gs|in|st|it|ut|io|ne|n BPE-60k Gesundheits|forsch|ungsinstitu|ten BPE-J90k Gesundheits|forsch|ungsin|stitute source asinine situation reference dumme Situation WDict asinine situation →UNK →asinine C2-50k as|in|in|e situation →As|in|en|si|tu|at|io|n BPE-60k as|in|ine situation →A|in|line-|Situation BPE-J90K as|in|ine situation →As|in|in-|Situation Table 4: English→German translation example. “|” marks subword boundaries. system sentence source Mirzayeva reference Ìèðçàåâà (Mirzaeva) WDict Mirzayeva →UNK →Mirzayeva C2-50k Mi|rz|ay|ev|a →Ìè|ðç|àå|âà (Mi|rz|ae|va) BPE-60k Mirz|ayeva →Ìèð|çà|åâà (Mir|za|eva) BPE-J90k Mir|za|yeva →Ìèð|çà|åâà (Mir|za|eva) source rakfisk reference ðàêôèñêà (rakfiska) WDict rakfisk →UNK →rakfisk C2-50k ra|kf|is|k →ðà|êô|èñ|ê (ra|kf|is|k) BPE-60k rak|f|isk →ïðà|ô|èñê (pra|f|isk) BPE-J90k rak|f|isk →ðàê|ô|èñêà (rak|f|iska) Table 5: English→Russian translation examples. 
“|” marks subword boundaries. (pra|krit|i)), from which the translation (rak→ïðà) is erroneously learned. The segmentation of the joint BPE system (BPE-J90k) is more consistent (pra|krit|i→ïðà|êðèò|è (pra|krit|i)). 6 Conclusion The main contribution of this paper is that we show that neural machine translation systems are capable of open-vocabulary translation by representing rare and unseen words as a sequence of subword units.14 This is both simpler and more effective than using a back-off translation model. We introduce a variant of byte pair encoding for word segmentation, which is capable of encoding open vocabularies with a compact symbol vocabulary of variable-length subword units. We show performance gains over the baseline with both BPE segmentation, and a simple character bigram segmentation. Our analysis shows that not only out-ofvocabulary words, but also rare in-vocabulary words are translated poorly by our baseline NMT 14The source code of the segmentation algorithms is available at https://github.com/rsennrich/ subword-nmt. system, and that reducing the vocabulary size of subword models can actually improve performance. In this work, our choice of vocabulary size is somewhat arbitrary, and mainly motivated by comparison to prior work. One avenue of future research is to learn the optimal vocabulary size for a translation task, which we expect to depend on the language pair and amount of training data, automatically. We also believe there is further potential in bilingually informed segmentation algorithms to create more alignable subword units, although the segmentation algorithm cannot rely on the target text at runtime. While the relative effectiveness will depend on language-specific factors such as vocabulary size, we believe that subword segmentations are suitable for most language pairs, eliminating the need for large NMT vocabularies or back-off models. Acknowledgments We thank Maja Popovi´c for her implementation of CHRF, with which we verified our reimplementation. The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland. This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR). Issam Bazzi and James R. Glass. 2000. Modeling outof-vocabulary words for robust speech recognition. In Sixth International Conference on Spoken Language Processing, ICSLP 2000 / INTERSPEECH 2000, pages 401–404, Beijing, China. Jan A. Botha and Phil Blunsom. 2014. Compositional Morphology for Word Representations and Language Modelling. In Proceedings of the 31st International Conference on Machine Learning (ICML), Beijing, China. Rohan Chitnis and John DeNero. 2015. VariableLength Word Encodings for Neural Translation Models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1723 Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. 
Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2002. Unsupervised Discovery of Morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning, pages 21–30. Association for Computational Linguistics. Nadir Durrani, Hassan Sajjad, Hieu Hoang, and Philipp Koehn. 2014. Integrating an Unsupervised Transliteration Model into Statistical Machine Translation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2014, pages 148–153, Gothenburg, Sweden. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Philip Gage. 1994. A New Algorithm for Data Compression. C Users J., 12(2):23–38, February. Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133, Lisbon, Portugal. Association for Computational Linguistics. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Beijing, China. Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle. Association for Computational Linguistics. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-Aware Neural Language Models. CoRR, abs/1508.06615. Philipp Koehn and Kevin Knight. 2003. Empirical Methods for Compound Splitting. In EACL ’03: Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics, pages 187–193, Budapest, Hungary. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Franklin M. Liang. 1983. Word hy-phen-a-tion by com-put-er. Ph.D. thesis, Stanford University, Department of Linguistics, Stanford, CA. Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015a. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1520– 1530, Lisbon, Portugal. Association for Computational Linguistics. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015b. Character-based Neural Machine Translation. ArXiv e-prints, November. Thang Luong, Richard Socher, and Christopher D. Manning. 2013. 
Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013, pages 104– 113. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412– 1421, Lisbon, Portugal. Association for Computational Linguistics. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the Rare Word Problem in Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19, Beijing, China. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Stefan Kombrink, and Jan Cernocký. 2012. Subword Language Modeling with Neural Networks. Unpublished. 1724 Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2012. Machine Translation without Words through Substring Alignment. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers, pages 165–174. Sonja Nießen and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In 18th Int. Conf. on Computational Linguistics, pages 1081–1085. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pages 1310–1318, Atlanta, USA. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Rico Sennrich and Barry Haddow. 2015. A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2081–2087, Lisbon, Portugal. Association for Computational Linguistics. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised Multilingual Learning for Morphological Segmentation. In Proceedings of ACL-08: HLT, pages 737–745, Columbus, Ohio. Association for Computational Linguistics. David Stallard, Jacob Devlin, Michael Kayser, Yoong Keok Lee, and Regina Barzilay. 2012. Unsupervised Morphology Rivals Supervised Morphology for Arabic MT. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 2: Short Papers, pages 322– 327. Miloš Stanojevi´c, Amir Kamran, Philipp Koehn, and Ondˇrej Bojar. 2015. Results of the WMT15 Metrics Shared Task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 256– 273, Lisbon, Portugal. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112, Montreal, Quebec, Canada. Jörg Tiedemann. 2009. Character-based PSMT for Closely Related Languages. 
In Proceedings of 13th Annual Conference of the European Association for Machine Translation (EAMT’09), pages 12–19. Jörg Tiedemann. 2012. Character-Based Pivot Translation for Under-Resourced Languages and Domains. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 141–151, Avignon, France. Association for Computational Linguistics. David Vilar, Jan-Thorsten Peter, and Hermann Ney. 2007. Can We Translate Letters? In Second Workshop on Statistical Machine Translation, pages 33– 39, Prague, Czech Republic. Association for Computational Linguistics. Sami Virpioja, Jaakko J. Väyrynen, Mathias Creutz, and Markus Sadeniemi. 2007. Morphology-Aware Statistical Machine Translation Based on Morphs Induced in an Unsupervised Manner. In Proceedings of the Machine Translation Summit XI, pages 491–498, Copenhagen, Denmark. Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701. 1725
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1726–1735, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang Shanghai Key Laboratory of Data Science School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China jfchen14,qz,pfliu14,xpqiu,[email protected] Abstract Word pairs, which are one of the most easily accessible features between two text segments, have been proven to be very useful for detecting the discourse relations held between text segments. However, because of the data sparsity problem, the performance achieved by using word pair features is limited. In this paper, in order to overcome the data sparsity problem, we propose the use of word embeddings to replace the original words. Moreover, we adopt a gated relevance network to capture the semantic interaction between word pairs, and then aggregate those semantic interactions using a pooling layer to select the most informative interactions. Experimental results on Penn Discourse Tree Bank show that the proposed method without using manually designed features can achieve better performance on recognizing the discourse level relations in all of the relations. 1 Introduction In a well-written document, no unit of the text is completely isolated, discourse relations describe how two units (e.g. clauses, sentences, and larger multi-clause groupings) of discourse are logically connected. Many downstream NLP applications such as opinion mining, summarization, and event detection, can benefit from those relations. The task of automatically identify discourse relation is relatively simple when explicit connectives such as however and because are given (Pitler et al., 2009). However, the identification becomes much more challenging when such connectives are missing. In fact, such implicit discourse relations outnumber explicit relations in naturally occurring text, and identify those relations have been shown to be the performance bottleneck of an end-to-end discourse parser (Lin et al., 2014). Most of the existing researches used rich linguistic features and supervised learning methods to achieve the task (Soricut and Marcu, 2003; Pitler et al., 2009; Rutherford and Xue, 2014). Among their works, word pairs are heavily used as an important feature, since word pairs like (warm,cold) might directly trigger a contrast relation. However, because of the data sparsity problem (McKeown and Biran, 2013) and the lack of metrics to measure the semantic relation between those pairs, which is so-called the semantic gap problem (Zhao and Grosky, 2002), the classifiers based on word pairs in the previous studies did not work well. Moreover, some text segment pairs are more complicated, it is hard to determine the relation held between them using only word pairs. Consider the following sentence pair with a casual relation as an example: S1: Psyllium’s not a good crop. S2: You get a rain at the wrong time and the crop is ruined. Intuitively, (good, wrong) and (good, ruined), seem to be the most informative word pairs, and it is likely that they will trigger a contrast relation. Therefore, we can see that another main disadvantage of using word pairs is the lack of contextual information, and using n-gram pairs will again suffer from data sparsity problem. 
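To see why word-pair features are so sparse, note that they are essentially the Cartesian product of the tokens of the two arguments; the tiny sketch below (our illustration, not the feature extractor used in the cited work) makes this concrete for the example above. With a vocabulary of size V the pair space grows as V², so any particular informative pair such as (good, ruined) is observed only rarely in training data.

```python
from itertools import product

def word_pair_features(arg1, arg2):
    # Cross product of the tokens of the two discourse arguments.
    return [(w1, w2) for w1, w2 in product(arg1.lower().split(), arg2.lower().split())]

s1 = "Psyllium's not a good crop"
s2 = "You get a rain at the wrong time and the crop is ruined"
pairs = word_pair_features(s1, s2)
print(len(pairs))                                               # 5 * 13 = 65 pairs
print(('good', 'wrong') in pairs, ('good', 'ruined') in pairs)  # True True
```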
Recently, the distributed word representations (Bengio et al., 2006; Mikolov et al., 2013) have shown an advantage when dealing with data sparsity problem (Braud and Denis, 2015), and many deep learning based models are generating substantial interests in text semantic matching and have achieved some significant progresses (Hu 1726 Psyllium's not a good crop. You get a rain at the wrong time and the crop is ruined. f Bidirectional LSTM Pooling Layer MLP Gated Relevance Network Bilinear Tensor Single Layer Network Figure 1: The processing framework of the proposed approach. et al., 2014; Qiu and Huang, 2015; Wan et al., 2015). Inspired by their work, we in this paper propose the use of word embeddings to replace the original words in the text segments to fight against the data sparsity problem. Further more, in order to preserve the contextual information around the word embeddings, we encode the text segment to its positional representation via a recurrent neural network, specifically, we use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997). Then, to overcome the semantic gap, we propose the use of a gated relevance network to capture the semantic interaction between those positional representations. Finally, all the interactions generated by the relevance network are fed to a max pooling layer to get the strongest interactions. We then aggregate them to predict the discourse relation through a multi-layer perceptron (MLP). Our model is trained end to end by BackPropagation and Adagrad. The main contribution of this paper can be summarized as follows: • We use word embeddings to replace the original words in the text segments to overcome data sparsity problem. In order to preserve the contextual information, we further encode the text segment to its positional representation through a recurrent neural network. • To deal with the semantic gap problem, we adopt a gated relevance network to capture the semantic interaction between the intermediate representations of the text segments. • Experimental results on PDTB (Prasad et al., 2008) show that the proposed method can achieve better performance in recognizing discourse level relations in all of the relations than the previous methods. 2 The Proposed Method The architecture of our proposed method is shown in figure 1. In the following of this section, we will illustrate the details of the proposed framework. 2.1 Embedding Layer To model the sentences with neural model, we firstly need to transform the one-hot representation of word into the distributed representation. All words of two text segments X and Y will be mapped into low dimensional vector representations, which are taken as input of the network. 1727 Through this layer, we can filter the words appear in low frequency, and we then map these words to a special OOV (out of vocabulary) word embedding. In addition, all the text segments in our experiment are padded to have the same length. 2.2 Sentence Modeling with LSTM Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) is a type of recurrent neural network (RNN), and specifically addresses the issue of learning long-term dependencies. Given a variable-length sentence S = (x0, x1, ..., xT ), LSTM processes it by incrementally adding up new content into a single slot of maintained memory, with gates controlling the extent to which new content should be memorized, old content should be erased and current content should be exposed. 
At position t, the memory c_t and the hidden state h_t are updated with the following equations:

\begin{bmatrix} \tilde{c}_t \\ o_t \\ i_t \\ f_t \end{bmatrix} = \begin{bmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \end{bmatrix} T_{A,b} \begin{bmatrix} x_t \\ h_{t-1} \end{bmatrix}, \quad (1)

c_t = \tilde{c}_t \odot i_t + c_{t-1} \odot f_t, \quad (2)

h_t = o_t \odot \tanh(c_t), \quad (3)

where i_t, f_t, o_t denote the input, forget and output gates at time step t respectively, and T_{A,b} is an affine transformation which depends on the network parameters A and b. σ denotes the logistic sigmoid function and ⊙ denotes element-wise multiplication. Notice that the LSTM defined above only gets context information from the past. However, context information from the future could also be crucial. To capture context from both the past and the future, we propose to use the bidirectional LSTM (Schuster and Paliwal, 1997). The bidirectional LSTM preserves the previous and future context information with two separate LSTMs: one encodes the sentence from start to end, and the other encodes it from end to start. Therefore, at each position t of the sentence, we can obtain two representations \overrightarrow{h}_t and \overleftarrow{h}_t. It is natural to concatenate them to get the intermediate representation at position t, i.e. h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]. An illustration of the bidirectional LSTM is shown in Figure 2.

[Figure 2: An illustration of the bidirectional LSTM, with a forward layer and a backward layer over the inputs x_{t-1}, x_t, x_{t+1}.]

Given a sentence S = (x_0, x_1, ..., x_T), we can now encode it with a bidirectional LSTM and replace the word w_t with h_t; we can interpret h_t as a representation summarizing the word at position t and its contextual information.

2.3 Gated Relevance Network

Given two text segments X = x_1, x_2, ..., x_n and Y = y_1, y_2, ..., y_m, after the encoding procedure with a bidirectional LSTM we obtain their positional representations X_h = h_{x_1}, h_{x_2}, ..., h_{x_n} and Y_h = h_{y_1}, h_{y_2}, ..., h_{y_m}. We then compute the relevance score between every intermediate representation pair h_{x_i} and h_{y_j}, each of dimension d_h. Traditional ways to measure their relevance include the cosine distance, the bilinear model (Sutskever et al., 2009; Jenatton et al., 2012), and the single layer neural network (Collobert and Weston, 2008). We illustrate the bilinear model and the single layer neural network in detail.

Bilinear Model is defined as follows:

s(h_{x_i}, h_{y_j}) = h_{x_i}^T M h_{y_j}, \quad (4)

where the only parameter is M \in \mathbb{R}^{d_h \times d_h}. The bilinear model is a simple but efficient way to incorporate the strong linear interactions between two vectors, while its main weakness is the lack of ability to deal with nonlinear interactions.

Single Layer Network is defined as:

s(h_{x_i}, h_{y_j}) = u^T f\left(V \begin{bmatrix} h_{x_i} \\ h_{y_j} \end{bmatrix} + b\right), \quad (5)

where f is a standard nonlinearity applied element-wise, V \in \mathbb{R}^{k \times 2d_h}, b \in \mathbb{R}^k, and u \in \mathbb{R}^k. The single layer network can capture nonlinear interactions, but at the expense of only weak interaction between the two vectors. Each of the two models has its own advantages, and they cannot take the place of each other. In our work, we propose to incorporate the two models through a gate mechanism, so that the model is more powerful in capturing complex semantic interactions.
The incorporated model, namely gated relevance network (GRN), is defined as: s(hxi, hyj) = uT (g ⊙hT xiM[1:r]hyj + (1 −g) ⊙f(V  hxi hyj  ) + b), (6) where f is a standard nonlinearity applied element-wise, M[1:r] ∈Rr×dh×dh is a bilinear tensor and the tensor product hT xiM[1:r]hyj results in a vector m ∈Rr, where each entry is computed by one slice k = 1, 2, ..., r of the tensor: mk = hT xiMkhyj, V ∈Rr×2dh, b ∈Rr, and u ∈Rr, g is a gate expressing how the output is produced by the linear and nonlinear semantic interactions between the input, defined as: g = σ(Wg  hxi hyj  + bg), (7) where Wg ∈Rr×2dh, b ∈Rr and σ denotes the logistic sigmoid function. The gated relevance network is a little bit similar to the Neural Tensor Network (NTN) proposed by Socher et al. (2013): s(hxi, hyj) = uT f(hT xiM[1:r]hyj+V  hxi hyj  +b). (8) Compared with NTN, the main advantage of our model is we use a gate to tell how the linear and nonlinear interaction should be combined, while in NTN, the interaction generated by bilinear model and single layer network are treated equally. Also, NTN feeds the incorporated interaction to a nonlinearity, while we are not. As we can see, for each pair of the intermediate representation, the gated relevance network will produce a semantic interaction score, thus, the entire output of two text segments is an interaction score matrix. 2.4 Max-Pooling Layer and MLP The relation between two text segments is often determined by some strong semantic interactions, therefore, we adopt max-pooling strategy which partitions the score matrix as shown in Figure 1 into a set of non-overlapping sub-regions, and for each such sub-region, outputs the maximum value. The pooling scores are further reshaped to a vector and fed to a multi-layer perceptron (MLP). More specifically, the vector obtained by the pooling layer is fed into a full connection hidden layer to get a more abstractive representation first, and then connect to the output layer. For the task of classification, the outputs are probabilities of different classes, which is computed by a softmax function after the fully-connected layer. We name the full architecture of our model Bi-LSTM+GRN. 2.5 Model Training Given a text segment pair (X, Y ) and its label l, the training objective is to minimize the crossentropy of the predicted and the true label distributions, defined as: L(X, Y ; l,ˆl) = − C X j=1 lj log(ˆlj), (9) where l is one-hot representation of the groundtruth label l; ˆl is the predicted probabilities of labels; C is the class number. To minimize the objective, we use stochastic gradient descent with the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatches. The parameter update for the i-th parameter θt,i at time step t is as follows: θt,i = θt−1,i − α qPt τ=1 g2 τ,i gt,i, (10) where α is the initial learning rate and gτ ∈R|θτ,i| is the gradient at time step τ for parameter θτ,i. 3 Experiment 3.1 Dataset The dataset we used in this work is Penn Discourse Treebank 2.0 (Prasad et al., 2008), which is one of the largest available annotated corpora of discourse relations. It contains 40,600 relations, which are manually annotated from the same 2,312 Wall Street Journal (WSJ) articles as the Penn Treebank. We follow the recommended section partition of PDTB 2.0, which is to use sections 2-20 for training, sections 21-22 for testing and the other sections for validation (Prasad et al., 2008). 
For comparison with the previous work (Pitler et al., 2009; Zhou et al., 2010; Park 1729 Table 1: The unbalanced sample distribution of PDTB. Relation Train Dev Test Comparison 1894 401 146 Contingency 3281 628 276 Expansion 6792 1253 556 Temporal 665 93 68 and Cardie, 2012; Rutherford and Xue, 2014; Ji and Eisenstein, 2015), we train four binary classifiers to identify each of the top level relations, the EntRel relations are merged with Expansion relations. For each classifier, we use an equal number of positive and negative samples as training data, because each of the relations except Expansion is infrequent (Pitler et al., 2009) as what shows in Table 1. The negative samples were chosen randomly from training sections 2-20. 3.2 Experiment Protocols In this part, we will mainly introduce the experiment settings, including baselines and parameter setting. 3.2.1 Baselines The baselines for comparison with our proposed method are listed as follows: • LSTM: We use two single LSTM to encode the two text segments, then concatenate them and feed to a MLP to do the relation detection. • Bi-LSTM: We use two single bidirectional LSTM to encode the two text segments, then concatenate them and feed to a MLP to do the relation detection. • Word+NTN: We use the neural tensor defined in (8) to capture the semantic interaction scores between every word embedding pair, the rest of the method is the same as our proposed method. • LSTM+NTN: We use two single LSTM to generate the positional text segments representation. The rest of the method is the same as Word-NTN. • BLSTM+NTN: We use two single bidirectional LSTM to generate the positional text Table 2: Hyperparameters for our model in the experiment. Word Embedding size nw = 50 Initial learning rate ρ = 0.01 Minibatch size m = 32 Pooling Size (p, q) = (3, 3) Number of tensor slice r = 2 segments representation. The rest of the method is the same as Word-NTN. • Word+GRN: We use the gated relevance network proposed in this paper to capture the semantic interaction scores between every word embedding pair of the two text segments. The rest of the method is the same as our model. • LSTM+GRN: We use the gated relevance network proposed in this paper to capture the semantic interaction scores between every intermediate representation pair of the two text segments generated by LSTM. The rest of the method is the same as our model. 3.2.2 Parameter Setting For the initialization of the word embeddings used in our model, we use the 50-dimensional pre-trained embeddings provided by Turian et al. (2010), and the embeddings are fixed during training. We only preserve the top 10,000 words according to its frequency of occurrence in the training data, all the text segments are padded to have the same length of 50, the intermediate representations of LSTM are also set to 50. The other parameters are initialized by randomly sampling from uniform distribution in [-0.1,0.1]. For other hyperparameters of our proposed model, we take those hyperparameters that achieved best performance on the development set, and keep the same parameters for other competitors. The final hyper-parameters are show in Table 2. 3.3 Result The results on PDTB are show in Table3, from the results, we have several experiment findings. First of all, it is easy to notice that LSTM and Bi-LSTM achieve lower performance than all of the methods of using a tensor to capture the semantic interactions between word pairs and the intermediate representation pairs. 
Because the main 1730 Table 3: The performances of different approaches on the PDTB. Comparison Contingency Expansion Temporal (Pitler et al., 2009) 21.96% 47.13% 76.42% 16.76% (Zhou et al., 2010) 31.79% 47.16% 70.11% 20.30% (Park and Cardie, 2012) 31.32% 49.82% 79.22% 26.57% (Rutherford and Xue, 2014) 39.70% 54.42% 80.44% 28.69% (Ji and Eisenstein, 2015) 35.93% 52.78% 80.02% 27.63% LSTM 31.78% 45.39% 75.10% 19.65% Bi-LSTM 31.97% 45.66% 75.13% 20.02% Word+NTN 32.18% 46.45% 77.64% 21.60% LSTM+NTN 36.82% 50.09% 79.88% 26.54% Bi-LSTM+NTN 39.36% 53.74% 80.02% 28.41% Word+GRN 32.67% 46.52% 77.68% 21.21% LSTM+GRN 38.13% 52.25% 79.96% 27.15% Bi-LSTM+GRN 40.17% 54.76% 80.62% 31.32% disadvantage of using LSTM and Bi-LSTM to encode the text segment into a single representation is that some important local information such as key words can not be fully preserved when compressing a long sentence into a single representation. Second, the performance improves a lot when using LSTM and Bi-LSTM to encode the text segments to positional representations instead of using word representations directly. We conclude it is mainly because the following two reasons: for one thing, some words are important only when they are associated with their context, for the other, the intermediate representations are the high-level representation of the sentence at each position, there is no doubt for they can obtain much more semantic information than the words along. In addition, Bi-LSTM also takes the future information of the text segments into consideration, resulting in a consistently better performance than LSTM. Third, take a comparison to the methods using NTN and the methods using GRN, we can find that the GRN performs consistently better. Such results show that the gate we proposed to combine the information of two aspects is actually useful. At last, our proposed model, namely, BiLSTM+GRN achieves best performance on all of the relations. It not only shows the interaction between word pairs is useful, but also shows the way we proposed to capture such information is useful too. Further more, compared with the previous methods (Pitler et al., 2009; Park and Cardie, 2012; Rutherford and Xue, 2014), which used either a lot of complex textual features and contextual information about the two text segments or a larger unannotated corpus to help the prediction, our model only uses the the information of the two text segments themselves, but yet achieves better performance. It demonstrates that our model is powerful in modeling the discourse relations. 3.4 Parameter Sensitivity In this section, we evaluate our model through different settings of the proposed gated relevance network, the other hyperparameters are the same as mentioned above and we use a bidirectional LSTM to encode the text segments. The settings are shown as follows: • GRN-1 We set the parameters r = 1, M[1] = I, V = 0, b = 0 and g = 1. The model can be regarded as cosine similarity. Cmp Ctg Exp Tmp GRN-1 34.01% 50.23% 78.66% 21.33% GRN-2 35.74% 50.91% 79.40% 23.28% GRN-3 34.15% 51.78% 79.33% 26.16% GRN-4 37.18% 52.36% 79.86% 25.60% GRN-5 40.17% 54.76% 80.62% 31.32% GRN-6 38.26% 52.08% 80.02% 28.51% Table 4: Comparison of our model with different parameter settings to the gated relevance network. Cmp denotes the comparison relation, Ctg denotes the contingency relation, Exp denotes the expansion relation and Tmp denotes the temporal relation. 1731 • GRN-2 We set the parameters r = 1, V = 0, b = 0, and g = 1. 
The model can be regarded as the bilinear model. • GRN-3 We set the parameters r = 1, M[1] = 0, and g = 0. The model can be regarded as a single layer network. • GRN-4 We set the parameters r = 1. This model is the full GRN model. • GRN-5 We set the parameters r = 2. This model is the full GRN model. • GRN-6 We set the parameters r = 3. This model is the full GRN model. The results for different parameter settings are shown in Table 4. It is obvious that GRN-1 achieves a relatively lower performance, showing that the cosine similarity is not enough to capture the complex semantic interaction. Take a comparison on GRN-2 and GRN-3, we can see that GRN-2 outperforms GRN-3 on Comparison and Expansion relation, while achieves a lower performance on the other two relations, moreover, the combination method GRN-4 outperforms both of the methods, demonstrating that the semantic interactions captured by the bilinear model and the single layer network are different. Hence, they can not take the place of each other, and it is reasonable to use a gate to combine them. Among the methods of using the full GRN model, GRN-5 which has 2 bilinear tensor slices achieves the best performance. We explain this phenomenon on two aspects, on one hand, we can see each slice of the bilinear tensor as being responsible for one type of the relation, a bilinear tensor with 2 slices is more suitable for training a binary classifier than the original bilinear model. On the other hand, increasing the number of slices will increase the complexity of the model, thus making it harder to train. 3.5 Case Study In this section, we go back to the example mentioned above to show see what information between the text segment pairs is captured, and how the positional sentence representations affect the performance of our model. The examples is listed below: S1: Psyllium’s not a good crop. S2: You get a rain at the wrong time and the crop is ruined. You get a rain at the wrong time and the crop is ruined Psyllium is not a good crop (a) Word+GRN You get a rain at the wrong time and the crop is ruined Psyllium is not a good crop (b) Bi-LSTM+GRN Figure 3: A visualization of the interaction score matrix between two relatively complex sentences. The darker patches denote the corresponding scores generated by the gated relevance network are higher. In this case, the relation between the sentence pair is Contingency, and the implicit connective annotated by human is “because”. The pair is likely to be classified to a wrong contrast relation if we only focus on the informative word pairs (good,wrong) and (good,ruined). It is mainly because their relation is highly depended on the semantic of the whole sentence, and the words should be considered with their context. Figure 3 explains this phenomenon, in Figure 3a, we can see that the word pairs which associate with “not” get high scores, scores on the other pairs are relatively arbitrary. It demonstrates the word embedding model failed to learn which part of the sentence should be focused, although the useless word such as “Psyllium” and “a” are 1732 ignored, thus making it harder to identify the relation. Take Figure 3b for a comparison, from the figure we can observe the pairs that associate with “not” and “good” which are import context to determine the semantic of the sentence get much higher scores. Moreover, the scores increase along with the sentence encoding procedure, especially when the last informative word “ruined” appears. 
Once again, some useless word are also ignored by this model. It demonstrates the bidirectional LSTM we used in our model could encode the contextual information to the intermediate representations, thus these information could help to determine which part of the two sentence should be focused when identifying their relation. 4 Related Work Discourse relations, which link clauses in text, are used to represent the overall text structure. Many downstream NLP tasks such as text summarization, question answering, and textual entailment can benefit from the task. Along with the increasing requirement, many works have been constructed to automatically identify these relations from different aspects (Pitler et al., 2008; Pitler et al., 2009; Zhou et al., 2010; McKeown and Biran, 2013; Rutherford and Xue, 2014; Xue et al., 2015). For training and comparing the performance of different methods, the Penn Discourse Treebank (PDTB) 2.0, which is large annotated discourse corpuses, were released in 2008 (Prasad et al., 2008). The annotation methodology of it follows the lexically grounded, predicate-argument approach. In PDTB, the discourse relations were predefined by Webber (2004). PDTB-styled discourse relations hold in only a local contextual window, and these relations are organized hierarchically. Also, every relation in PDTB has either an explicit or an implicit marker. Since explicit relations are easy to identify (Pitler et al., 2008), existing methods achieved good performance on the relations with explicit maker. In recent years, researchers mainly focused on implicit relations. For easily comparing with other methods, in this work, we also use PDTB as the training and testing corpus. As we mentioned above, various approaches have been proposed to do the task. Pitler et al. (2009) proposed to train four binary classifiers using word pairs as well as other rich linguistic features to automatically identify the top-level PDTB relations. Park and Cardie (2012) achieved a higher performance by optimizing the feature set. McKeown and Biran (2013) aims at solving the data sparsity problem, and they extended the work of Pitler et al. (2009) by aggregating word pairs. Rutherford and Xue (2014) used Brown clusters and coreferential patterns as new features and improved the baseline a lot. Braud and Denis (2015) compared different word representations for implicit relation classification. The word pairs feature have been studied by all of the work above, showing its importance on discourse relation. We follow their work, and incorporate word embedding to deal with this problem. There also exist some work performing this task from other perspectives. Zhou et al. (2010) studied the problem from predicting implicit marker. They used a language model to add implicit markers as an additional feature to improve performance. Their approach can be seen as a semisupervised method. Ji and Eisenstein (2015) computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. Chen et al. (2016) used vector offsets to represent this relation between sentence pairs, and aggregate this offsets through the Fisher vector. Liu et al. (2016) used a a mutil-task deep learning framework to deal with this problem, they incorporate other similar corpus to deal with the data sparsity problem. Most of the previous works mentioned above used rich linguistic features and supervised learning methods to achieve the task. 
In this paper, we propose a deep architecture, which does not need these manually selected features and additional linguistic knowledge base to do it. 5 Conclusion In this work, we propose to use word embeddings to fight against the data sparsity problem of word pairs. In order to preserve contextual information, we encode a sentence to its positional representation via a recurrent neural network, specifically, a LSTM. To solve the semantic gap between the word pairs, we propose to use a gated relevance network which incorporates both the linear and nonlinear interactions between pairs. Experiment results on PDTB show the proposed model outperforms the existing methods using traditional fea1733 tures on all of the relations. Acknowledgement The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No. 61532011, 61473092, and 61472088), the National High Technology Research and Development Program of China (No. 2015AA015408). References Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. Springer. Chlo´e Braud and Pascal Denis. 2015. Comparing word representations for implicit discourse relation classification. In Empirical Methods in Natural Language Processing (EMNLP 2015). Jifan Chen, Qi Zhang, Pengfei Liu, and Xuanjing Huang. 2016. Discourse relations detection via a mixed generative-discriminative framework. In AAAI. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Rodolphe Jenatton, Nicolas L Roux, Antoine Bordes, and Guillaume R Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems, pages 3167–3175. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association for Computational Linguistics, 3:329–344. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering, 20(02):151–184. Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In AAAI. Kathleen McKeown and Or Biran. 2013. Aggregated word pair features for implicit discourse relation disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 69–73. The Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Joonsuk Park and Claire Cardie. 2012. 
Improving implicit discourse relation recognition through feature set optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 108–112. Association for Computational Linguistics. Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind K Joshi. 2008. Easily identifiable discourse relations. Technical Reports (CIS), page 884. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 683–691. Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1305–1311. Attapol Rutherford and Nianwen Xue. 2014. Discovering implicit discourse relations through brown cluster pair representation and coreference patterns. In EACL, volume 645, page 2014. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934. 1734 Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 149–156. Association for Computational Linguistics. Ilya Sutskever, Joshua B Tenenbaum, and Ruslan R Salakhutdinov. 2009. Modelling relational data using bayesian clustered tensor factorization. In Advances in neural information processing systems, pages 1821–1828. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2015. A deep architecture for semantic matching with multiple positional sentence representations. arXiv preprint arXiv:1511.08277. Bonnie Webber. 2004. D-ltag: Extending lexicalized tag to discourse. Cognitive Science, 28(5):751–779. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi PrasadO Christopher Bryant, and Attapol T Rutherford. 2015. The conll-2015 shared task on shallow discourse parsing. In Proceedings of CoNLL, page 2. Rong Zhao and William I Grosky. 2002. Narrowing the semantic gap-improved text-based web document retrieval using visual features. Multimedia, IEEE Transactions on, 4(2):189–200. Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1507–1514. Association for Computational Linguistics. 1735
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1736–1745, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Model Architectures for Quotation Detection Christian Scheible, Roman Klinger and Sebastian Pad´o Institut f¨ur Maschinelle Sprachverarbeitung Universit¨at Stuttgart {scheibcn,klinger,pado}@ims.uni-stuttgart.de Abstract Quotation detection is the task of locating spans of quoted speech in text. The state of the art treats this problem as a sequence labeling task and employs linear-chain conditional random fields. We question the efficacy of this choice: The Markov assumption in the model prohibits it from making joint decisions about the begin, end, and internal context of a quotation. We perform an extensive analysis with two new model architectures. We find that (a), simple boundary classification combined with a greedy prediction strategy is competitive with the state of the art; (b), a semi-Markov model significantly outperforms all others, by relaxing the Markov assumption. 1 Introduction Quotations are occurrences of reported speech, thought, and writing in text. They play an important role in computational linguistics and digital humanities, providing evidence for, e.g., speaker relationships (Elson et al., 2010), inter-speaker sentiment (Nalisnick and Baird, 2013) or politeness (Faruqui and Pado, 2012). Due to a lack of generalpurpose automatic systems, such information is often obtained through manual annotation (e.g., Agarwal et al. (2012)), which is labor-intensive and costly. Thus, models for automatic quotation detection form a growing research area (e.g., Pouliquen et al. (2007); Pareti et al. (2013)). Quotation detection looks deceptively simple, but is challenging, as the following example shows: [The pipeline], the company said, [would be built by a proposed joint venture ..., and Trunkline . . . will “build and operate” the system . . . ].1 1Penn Attributions Relation Corpus (PARC), wsj 0260 Note that quotations can (i) be signalled by lexical cues (e.g., communication verbs) without quotation marks, (ii) contain misleading quotation marks; (iii) be discontinuous, and (iv) be arbitrarily long. Early approaches to quotation detection use hand-crafted rules based on syntactic markers (Pouliquen et al., 2007; Krestel et al., 2008). While yielding high precision, they suffered from low recall. The state of the art (Pareti et al., 2013; Pareti, 2015) treats the task as a sequence classification problem and uses a linear-chain conditional random field (CRF). This approach works well for the prediction of the approximate location of quotations, but yields a lower performance detecting their exact span. In this paper, we show that linear-chain sequence models are a sub-optimal choice for this task. The main reason is their length, as remarked above: Most sequence labeling tasks in NLP (such as most cases of named entity recognition) deal with spans of a few tokens. In contrast, the median quotation length on the Penn Attributions Relation Corpus (PARC, Pareti et al. (2013)) is 16 tokens and the longest span has over 100 tokens. As a result of the strong Markov assumptions that linear-chain CRFs make to ensure tractability, they cannot capture “global” properties of (almost all) quotations and are unable to make joint decisions about the begin point, end point, and content of quotations. 
As our first main contribution in this paper, we propose two novel model architectures designed to investigate this claim. The first is simpler than the CRF. It uses token-level classifiers to predict quotation boundaries and combines the boundaries greedily to predict spans. The second model is more expressive. It is a semi-Markov sequence model which relaxes the Markov assumption, enabling it to consider global features of quotation spans. In our second main contribution, an analysis of the models’ performances, we find that the sim1736 pler model is competitive with the state-of-the-art CRF. The semi-Markov model outperforms both of them significantly by 3 % F1. This demonstrates that the relaxed Markov assumptions help improve performance. Our final contribution is to make implementations of all models publicly available.2 2 The Task: Quotation Detection Problem Definition Following the terminology established by Pareti et al. (2013), we deal with the detection of content spans, the parts of the text that are being quoted. To locate such spans, it is helpful to first detect cues which often mark the beginning or end of a quotation. The following example shows an annotated sentence from the PARC corpus; each content span (CONT) is associated with exactly one cue span (CUE): Mr. Kaye [denies]CUE [the suit’s charges]CONT and [says]CUE [his only mistake was taking on Sony in the marketplace]CONT.3 Pareti et al. (2013) distinguish three types of quotations. Direct quotations are fully enclosed in quotation marks and are a verbatim reproduction of the original utterance. Indirect quotations paraphrase the original utterance and have no quotation marks. Mixed quotations contain both verbatim and paraphrase content and may thus contain quotation marks. Note that the type of a content span is assigned automatically based on its surface form using the definitions just given. Quotation Detection as Sequence Modeling In this paper, we compare our new model architectures to the state-of-the-art approach by Pareti (2015), an extension of Pareti et al. (2013). Their system is a pipeline: Its first component is the cue model, a token-level k-NN classifier applied to the syntactic heads of all verb groups. After cues are detected, content spans are localized using the content model, a linear-chain conditional random field (CRF) which makes use of the location of cues in the document through features. As their system is not publicly available, we reimplement it. Our cue classifier is an averaged perceptron (Collins, 2002) which we describe in more detail in the following section. It uses the 2http://www.ims.uni-stuttgart.de/data/qsample 3PARC, wsj 2418 C1. Surface form, lemma, and PoS tag for all tokens within a window of ±5. C2. Bigrams of surface form, lemma, and PoS tag C3. Shape of ti C4. Is any token in a window of ±5 a named entity? C5. Does a quotation mark open or close at ti (determined by counting)? Is ti within quotation marks? C6. Is ti in the list of reporting verbs, noun cue verbs, titles, WordNet persons or organizations, and its VerbNet class C7. Do a sentence, paragraph, or the document begin or end at ti, ti−1, or ti+1? C8. Distance to sentence begin and end; sentence length C9. Does the sentence contain ti a pronoun/named entity/quotation mark? C10. Does a syntactic constituent starts or ends at ti? C11. Level of ti in the constituent tree C12. Label and level of the highest constituent in the tree starting at ti; label of ti’s the parent node C13. 
Dependency relation with parent or any child of ti (with and without parent surface form) C14. Any conjunction of C5, C9, C10 Table 1: Cue detection features for a token ti at position i, mostly derived from Pareti (2015) S1. Is a direct or indirect dependency parent of ti classified as a cue, in the cue list, or the phrase “according to”? S2. Was any token in a window of ±5 classified as a cue? S3. Distance to the previous and next cue S4. Does the sentence containing ti have a cue? S5. Conjunction of S4 and all features from C14 Table 2: Additional features for content span detection, mostly derived from Pareti (2015) features in Table 1.4 Our content model is a CRF with BIOE labels. It uses all features from Table 1 plus features that build on the output of the cue classifier, shown in Table 2. 3 New Model Architectures While Pareti (2015) apply sequence modeling for quotation detection, they do not provide an analysis what the model learns. In this paper, we follow the intuition that a linear-chain CRF mostly makes local decisions about spans, while ignoring their global structure, such as joint information about the context of the begin and end points. If this is true, then (a) a model might work as well as the CRF without learning from label sequences, and (b) a model which makes joint decisions with global information might improve over the CRF. This motivates our two new model architectures for the task. We illustrate the way the different architectures make use of information in Figure 1. Our simpler model (GREEDY) makes strictly local classification decisions, completely ignoring 4For replicability, we give more detailed definitions of the features in the supplementary notes. 1737 ti CRF: GREEDY: SEMIMARKOV: ti+1 ti+2 ti+3 ti+4 ti+5 ti+6 ti ti+1 ti+2 ti+3 ti+4 ti+5 ti+6 ti ti+1 ti+2 ti+3 ti+4 ti+5 ti+6 Figure 1: Information usage by model architecture. Frames indicate joint decisions on token labels. cue classifier begin classifier end classifier greedy combination (GREEDY) semi-Markov model (SEMIMARKOV) cue detection boundary detection span detection linear-chain conditional random field (Pareti et al. 2013, Pareti 2015) Figure 2: Information flow in all three models those around it. The CRF is able to coordinate decisions within a window, which is propagated through Viterbi decoding. The more powerful model (SEMIMARKOV) takes the full span into account and makes a joint decision about the begin and end points. Our intuition about the shortcomings of the CRF is based on an empirical analysis. However, to simplify the presentation, we postpone the presentation of this analysis to Section 6 where we can discuss and compare the results of all three models. 3.1 Model Decomposition and Formalization We first introduce a common formalization for our model descriptions. Our problem of interest is content span detection, the task of predicting a set S of content spans (tb, te) delimited by their begin and end tokens. The CRF solves this task by classifying tokens as begin/end/inside/outside tokens and thus solves a proxy problem. The problem is difficult because corresponding begin and end points need to be matched up over long distances, a challenge for probabilistic finite state automata such as CRFs. In our model, cue detection, the task of detecting cue tokens tc (cf. Section 2), remains the first step. However, we then decompose the content span problem solved by the CRF by introducing the intermediary task of boundary detection. 
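Schematically, the decomposition just described (and formalized in the following paragraphs) amounts to three independent per-token linear detectors whose positive decisions are handed to a recombination step, as in the sketch below. The feature extraction and the recombination function are deliberately left abstract; their concrete instantiations are the GREEDY procedure of Section 3.2 and the semi-Markov sampler of Section 3.3.

```python
import numpy as np

def token_score(theta, feats):
    """Linear per-token score theta_x . f_x(t), shared by the cue, begin and
    end detectors (trained with an averaged perceptron in the paper)."""
    return float(np.dot(theta, feats))

def detect_spans(token_feats, theta_c, theta_b, theta_e, recombine):
    """Skeleton of the decomposition in Figure 2: independent per-token cue,
    begin and end decisions, followed by a recombination step.  Feature
    vectors and the recombination function are assumptions left abstract."""
    cues   = [i for i, f in enumerate(token_feats) if token_score(theta_c, f) > 0]
    begins = [i for i, f in enumerate(token_feats) if token_score(theta_b, f) > 0]
    ends   = [i for i, f in enumerate(token_feats) if token_score(theta_e, f) > 0]
    return recombine(cues, begins, ends)   # -> set of (t_b, t_e) content spans
```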
As illustrated in Figure 2, this means identifying the sets of all begin and end tokens, tb and te, ignoring their interdependencies. We then recombine these Algorithm 1 GREEDY content span algorithm Input: List of documents D; feature functions f x for cue, begin, and end (x ∈c, b, e); distance parameter dmax; length parameter ℓmax Output: Content span labeling S 1: θc, θb, θe ←TRAINCLASSIFIERS(D, f c, f b, f e) 2: for d in D do 3: S ←∅ 4: for token t in d do 5: if θc · f c(t) > 0 then 6: tb ←next token right of t ▷next begin where θb · f b(t) > 0 7: te ←next token right of tb ▷next end where θe · f e(t) > 0 8: if |tb −tc| ≤dmax and |te −tb| ≤ℓmax and OVERLAPPING(tb, te) = ∅ then 9: S ←S ∪{(tb, te)} ▷add span predictions with two different strategies, as detailed in Section 3.2 and Section 3.3. This decomposition has two advantages: (a), we expect that boundary detection is easier than content span detection, as we remove the combinatorial complexity of matching begin and end tokens; (b), begin, end, and cue detection are now three identical classification tasks that can be solved by the same machinery. We model each of the three tasks (cue/begin/end detection) with a linear classifier of the form scorex(t) = θx · f x(t) (1) for a token t, a class x ∈{c, b, e} (for cue, begin, and end), a feature extraction function f x(t), and a weight vector θx. We re-use the feature templates from Section 2 to remain comparable to the CRF. We estimate all parameters θx with the perceptron algorithm, and use parameter averaging (Collins, 2002). Since class imbalances, which occur in the boundary detection tasks, can have strong effects (Barandela et al., 2003), we train the perceptron with uneven margins (Li et al., 2002). This variant introduces two learning margins: τ−1 for the negative class and τ+1 for the positive class. Increasing τ+1 at a constant τ−1 increases recall (as failure to predict this class is punished more), potentially at the loss of precision, and vice versa. 3.2 Greedy Span Detection Our first new model, GREEDY (Figure 2, bottom center), builds on the assumption that the modeling of sequence properties in a linear-chain CRF is weak enough that sequence learning can be replaced by a greedy procedure. Algorithm 1 shows how we generate a span labeling based on the output of the boundary classifiers. Starting at each cue, 1738 we add all spans within a given distance dmax from the cue whose length is below a given maximum ℓmax. If the candidate span is OVERLAPPING with any existing spans, we discard it. Analogously, we search for spans to the left of the cue. The algorithm is motivated by the structure of attribution relations: each content span has one associated cue. 3.3 Semi-Markov Span Detection Our second model extends the CRF into a semi-Markov architecture which is able to handle global features of quotation span candidates (SEMIMARKOV, Figure 2 bottom right). Following previous work (Sarawagi and Cohen, 2004), we relax the Markov assumption inside spans. This allows for extracting arbitrary features on each span, such as conjunctions of features on the begin and end tokens or occurrence counts within the span. Unfortunately, the more powerful model architecture comes at the cost of a more difficult prediction problem. Sarawagi and Cohen (2004) propose a variant of the Viterbi algorithm. This however does not scale to our application, since the maximum length of a span factors into the prediction runtime, and quotations can be arbitrarily long. 
As an alternative, we propose a sampling-based approach: we draw candidate spans (proposals) from an informed, non-uniform distribution of spans. We score these spans to decide whether they should be added to the document (accepted) or not (rejected). This way, we efficiently traverse the space of potential span assignments while still being able to make informed decisions (cf. Wick et al. (2011)). To obtain a distribution over spans, we adapt the approach by Zhang et al. (2015). We introduce two independent probability distributions: Pb is the distribution of probabilities of a token being a begin token; Pe is the distribution of probabilities of a token being an end token. We sample a single content span proposal (DRAWPROPOSAL) by first sampling the order in which the boundaries are to be determined (begin token or end token first) by sampling a binary variable d ∼Bernoulli(0.5). If the begin token is to be sampled first, we continue by drawing a begin token tb ∼Pb and finally draw an end token te ∼Pe within a window of up to ℓmax tokens to the right of tb. If the end token is to be sampled first, we proceed conversely. We also propose empty spans, i.e., the removal of existing spans without an replacement. For the distributions Pb and Pe, we reuse our Algorithm 2 SEMIMARKOV inference algorithm Input: Document d; probability distributions for begin and end (Pb, Pe); feature function for spans g; maximum span length ℓmax; number of proposals N Output: Set of content spans S 1: S ←∅ 2: θ ←0 3: for n = 1 to N do 4: (tb, te) ←DRAWPROPOSAL(Pb, Pe) 5: score ←θ · g(tb, te) 6: O ←OVERLAPPING(tb, te) 7: scoreO ←P (t′ b,t′e)∈O θ · g(t′ b, t′ e) 8: if score > scoreO then 9: S ←S \ O ▷remove overlapping 10: S ←S ∪{(tb, te)} ▷accept proposal 11: if ISTRAINING and ¬CORRECT(tb, te) then 12: PERCEPTRONUPDATE ▷wrongly accepted 13: else 14: REJECT(tb, te) 15: if ISTRAINING and CORRECT(tb, te) then 16: PERCEPTRONUPDATE ▷wrongly rejected boundary detection models from Section 3.1. For each class x ∈{b, e} we form a distribution Px(t) ∝exp(scorex(t)/Tx) (2) over the tokens t of a document using the scores from Equation 1. Tx is a temperature hyperparameter. Temperature controls the pronouncedness of peaks in the distribution. Higher temperature flattens the distribution and encourages the selection of tokens with lower scores. This is useful when exploration of the sample space is desired. The proposed candidates enter into the decision algorithm shown in Algorithm 2. As shown, the candidates are scored using a linear model (again as defined in Equation 1). We use the features of the previous models (Table 1 and 2) on the begin and end tokens. As we now judge complete span assignments rather than local label assignments to tokens, we can add a new span-global feature function g(tb, te). We introduce the features shown in Table 3. If the candidate’s score is higher than the sum of scores of all spans overlapping with it, we accept it and remove all overlapping ones. This model architecture can be seen as a modification of the pipeline of the GREEDY model (cf. Figure 2). We again detect cues and boundaries, but then make an informed decision for combining begin and end candidates. In addition, the sampler makes “soft” selections of begin and end tokens based on the model scores rather than simply accepting the classifier decisions. For training, we again use perceptron updates (cf. Section 3.2). 
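A minimal sketch of the proposal step described above, i.e., turning the boundary classifier scores into the distributions of Equation 2 and drawing a candidate span as in DRAWPROPOSAL of Algorithm 2, is given below. The renormalization of the truncated window around the first sampled boundary and the omission of empty-span proposals are simplifications; rng is assumed to be a NumPy random Generator.

```python
import numpy as np

def boundary_distribution(scores, temperature):
    """P_x(t) proportional to exp(score_x(t) / T_x): turn the begin/end scores
    for all tokens of a document into a proposal distribution (Equation 2)."""
    logits = np.asarray(scores, dtype=float) / temperature
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def draw_proposal(rng, p_begin, p_end, l_max):
    """Sample the order (begin or end first), then both boundaries, keeping
    the second boundary within l_max tokens of the first.  p_begin and p_end
    are NumPy arrays over the document's tokens."""
    n = len(p_begin)
    if rng.random() < 0.5:                                  # begin token first
        t_b = rng.choice(n, p=p_begin)
        window = p_end[t_b:min(n, t_b + l_max + 1)]         # window includes t_b for simplicity
        t_e = t_b + rng.choice(len(window), p=window / window.sum())
    else:                                                   # end token first
        t_e = rng.choice(n, p=p_end)
        lo = max(0, t_e - l_max)
        window = p_begin[lo:t_e + 1]
        t_b = lo + rng.choice(len(window), p=window / window.sum())
    return int(t_b), int(t_e)
```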
If the model accepts a wrong 1739 Setting Direct Indirect Mixed Overall P R F P R F P R F P R F strict Pareti (2015) as reported therein 94 88 91 78 56 65 67 60 63 80 63 71 CRF (own re-implementation) 94 93 94 73 58 64 81 68 74 79g 67 72 GREEDY 92 91 91 69 59 64 72 64 68 75 67 71 SEMIMARKOV 93 94 94 73 65 69 81 66 73 79g 71c g 75c g Combination: CRF+SEMIMARKOV 94 93 94 73 64 69 81 68 74 79g 71c g 75c g partial Pareti (2015) as reported therein 99 93 96 91 66 77 91 81 86 93 73 82 CRF (own re-implementation) 98 96 97 87 70 77 94 83 88 90g 77 83 GREEDY 97 95 96 83 76 79 93 85 89 88 81c 84 SEMIMARKOV 97 95 96 83 75 79 92 81 86 88 80 84 Combination: CRF+SEMIMARKOV 98 96 97 83 75 79 94 83 88 88 81c 84c Table 4: Results on the test set of PARC3. Best overall strict results in bold. Models as in Figure 2. g: significantly better than GREEDY; c: significantly better than CRF (both with α = 0.05). G1. Numbers of named entities, lowercased tokens, commas, and pronouns inside the span G2. Binned percentage of tokens that depend on a cue G3. Location of the closest cue (left/right?), percentage of dependents on that cue G4. Number of cues overlapped by the span G5. Is there a cue before the first token and/or after the last token of the span (within the same sentence)? first or after the last token of the span?, and their conjunction G6. Do both the first and the last token depend on a cue? G7. Binned length of the span G8. Does the span match a sentence exactly/off by one token? G9. Number of sentences covered by the span G10. Does the span match one or more constituents exactly? G11. Is the span direct, indirect, or mixed? G12. Is the # of quotation marks in the span odd or even? G13. Is the span is direct and does it contain more than two quotation marks? Table 3: Global features for content span detection span, we perform a negative update (Line 12 in Algorithm 2). If a correct span is rejected, we make a positive update (Line 16). We iterate over the documents in random order for a fixed number E of epochs. As the sampling procedure takes long to fully label documents, we employ GREEDY to make initial assignments. This does not constitute additional supervision, as the sampler can remove any initial span and thus refute the initialization. This reduces runtime without affecting the result in practice. 4 Experimental Setup Data We use the Penn Attribution Relations Corpus, version 3 (henceforth PARC3), by Pareti (2015).5 It contains AR annotations on the Wall Street Journal part of the Penn Treebank (2,294 5Note that the data and thus the results differ from those previously published in (Pareti et al., 2013). news documents). As in related work, we use sections 1–22 as training set, section 23 as test set, and section 24 as development set. We perform the same preprocessing as Pareti: We use gold tokenization, lemmatization, part-of-speech tags, constituency parses, gold named entity annotations (Weischedel and Brunstein, 2005), and Stanford parser dependency analyses (Manning et al., 2014). Evaluation We report precision, recall, and micro-averaged F1, adopting the two metrics introduced by Pareti et al. (2013): Strict match considers cases as correct where the boundaries of the spans match exactly. Partial match measures correctness as the ratio of overlap of the predicted and true spans. In both cases, we report numbers for each of the three quotation types (direct, indirect, mixed) and their micro averages. Like Pareti (2015), we exclude single-token content spans from the evaluation. 
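The two evaluation metrics can be summarized by the sketch below, where spans are (begin, end) token-index pairs with inclusive boundaries. The strict measure follows the definition above directly; for the partial measure, the exact aggregation is not spelled out here, so the best-overlap reading used in the sketch is an assumption.

```python
def strict_prf(pred, gold):
    """Strict match: a predicted span counts as correct only if both of its
    boundaries match a gold span exactly."""
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    tp = len(set(pred) & set(gold))
    p, r = tp / len(pred), tp / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def overlap_ratio(a, b):
    # token overlap between two inclusive spans, as a fraction of span b
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    return inter / (b[1] - b[0] + 1)

def partial_prf(pred, gold):
    """Partial match: credit each span with its best overlap ratio against the
    other set (one plausible reading of the ratio-of-overlap definition)."""
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    p = sum(max(overlap_ratio(g, s) for g in gold) for s in pred) / len(pred)
    r = sum(max(overlap_ratio(s, g) for s in pred) for g in gold) / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```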
To test for statistical significance of differences, we use the approximate randomization test (Noreen, 1989) at a significance level of α = 0.05. Implementation and Hyperparameters We use the CRF implementation in MALLET (McCallum, 2002). We optimize all hyperparameters of the models on the development set. Our best models use positive margins of τ+ = 25 for the boundary and τ+ = 15 for the span models, favoring recall. The SEMIMARKOV sampler uses a temperature of Tx = 10 for all classes. We perform 15 epochs of training after which the models have converged, and draw 1,000 samples for each document. For the GREEDY model, we obtain the best results with dmax = 30 and ℓmax = 55. For the SEMIMARKOV sampler, ℓmax = 75 is optimal. 1740 The high values mirror the presence of very long spans in the data. 5 Results Cue We first evaluate the cue classifier. We obtain an F1 of 86 %, with both precision and recall at 86 %, which is very close to the 85 % F1 of Pareti. CRF Table 4 summarizes the content span results. First, we compare Pareti’s results to our reimplementation (the rows denoted with Pareti (2015) and CRF). There are some differences in how well the model performs on certain types of spans: while our precision is lower for indirect spans, it is higher on mixed spans. Additionally, our implementation generally has higher recall than Pareti’s. Her system includes several features using proprietary lists (such as a manually curated list of titles) we were unable to obtain, and complex feature templates that we may interpret differently. We suspect that these differences are due to the typical replication problems in NLP (cf. Fokkens et al. (2013)). Overall, however, our model performs quite similarly to Pareti’s, with our model scoring an overall F1 of 72 % (vs. Pareti’s 71 %) and a partial F1 of 83 % (vs. 82 %). GREEDY Next, we compare the GREEDY model to the CRF. We find its overall performance to be comparable to the CRF, confirming our expectations. While strict precision is statistically significantly lower for GREEDY (75 % vs. 79 %), strict recall is not significantly different (bot at 67 %). Considering partial matches, GREEDY has significantly higher recall (81 % vs. 77 %) but significantly lower precision (88 % vs. 90 %) than the CRF, with an overall comparable F1. This result bolsters our hypothesis that the CRF learn only a small amount of useful sequence information. Although GREEDY ignores label sequences in training completely, it is able to compete with the CRF. Furthermore, the partial match result that GREEDY is a particularly good choice if the main interest is the approximate location of content spans in a document: The simpler model architecture makes it easier and more efficient to train and apply. The caveat is that GREEDY is particularly bad at locating mixed spans (as indicated by a precision of only 72 %): Quotation marks are generally good indicators for span boundaries and are often returned as false positives by the boundary detection models, so GREEDY tends to incorrectly pick them. SEMIMARKOV Overall, the SEMIMARKOV model outperforms the CRF significantly in terms of strict recall (71 % vs. 67 %) and F1 (75 % vs. 72 %), while precision remains unaffected (at 79 %). The model performs particularly well on indirect quotations (increasing F1 by 5 points to 69 %), the most difficult category, where local context is insufficient. Meanwhile, on partial match, the SEMIMARKOV model has a comparable recall (80 vs. 77 %), but significantly lower precision (88 % vs. 
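The significance markers used in these comparisons (and in Table 4) come from the approximate randomization test mentioned above; a minimal sketch of that procedure is given below, assuming the two systems' outputs are paired per document and that metric maps a list of per-document outputs to a score such as strict F1.

```python
import random

def approximate_randomization(outputs_a, outputs_b, metric, trials=10000, seed=1):
    """Paired approximate randomization test (Noreen, 1989): randomly swap the
    two systems' outputs per item and count how often the pseudo-difference is
    at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(metric(outputs_a) - metric(outputs_b))
    at_least_as_large = 0
    for _ in range(trials):
        shuf_a, shuf_b = [], []
        for a, b in zip(outputs_a, outputs_b):
            if rng.random() < 0.5:
                a, b = b, a
            shuf_a.append(a)
            shuf_b.append(b)
        if abs(metric(shuf_a) - metric(shuf_b)) >= observed:
            at_least_as_large += 1
    return (at_least_as_large + 1) / (trials + 1)   # p-value estimate; reject if below alpha = 0.05
```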
90 %). The overall partial F1 results are not significantly different. The improvement on the strict measures supports our intuition that better features help in particular in identifying the exact boundaries of quotations, a task that evidently profits from global information. Model Combination The complementary strengths of the CRF and SEMIMARKOV (CRF detects direct quotations well, SEMIMARKOV indirect quotations) suggest a simple model combination algorithm based on the surface form of the spans: First take all direct and mixed spans predicted by the CRF; then add all indirect spans from the SEMIMARKOV model (except for those which would overlap). This result is our overall best model under strict evaluation, although it is not significantly better than the SEMIMARKOV model. Considering partial match, its results are essentially identical to the SEMIMARKOV model. 6 Analysis We now proceed to a more detailed analysis of the performance of the three models (CRF, GREEDY, and SEMIMARKOV) and their differences in order to gain insights into the nature of the quotation detection task. In the interest of readability, we organize this section by major findings instead of the actual analyses that we have performed, and adduce for each finding all relevant analysis results. Finding 1: Variation in length does not explain the differences in model performance. A possible intuition about our models it that the improvement of SEMIMARKOV over CRF is due to a better handling of longer quotations. However, this is not the case. Figure 3 shows the recall of the three models for quotations binned by lengths. The main patterns hold across all three models: Mediumlength spans are the easiest to detect. Short spans are difficult to detect as they are often part of discontinuous content spans. Long spans are also 1741 [0,10) [10,20) [20,30) [30,40) [40,50) [50,100) Span length Recall 0.0 0.2 0.4 0.6 0.8 [0,10) [10,20) [20,30) [30,40) [40,50) [50,100) Span length Recall 0.0 0.2 0.4 0.6 0.8 [0,10) [10,20) [20,30) [30,40) [40,50) [50,100) Span length Recall 0.0 0.2 0.4 0.6 0.8 Figure 3: Strict recall by span length for CRF (left), GREEDY (center), and SEMIMARKOV model (right) Category Count B I E looking left 27 14 7 looking right 1 13 30 cue 11 10 7 other lexical 31 21 22 structural/syntactic 27 44 35 punctuation 31 25 36 Table 5: Categories of top positive and negative CRF features for begin (B), inside (I), and end (E) difficult since any wrong intermediary decision can falsify the prediction. In fact, the CRF model is even the best model among the three for very long spans (which are rare). Those spans exceed the 55 and 75 token limits ℓmax of the GREEDY and SEMIMARKOV models. Intuitively, for the CRF, most spans are long: even spans which are short in comparison to other quotations are longer than the window within which the CRF operates. This is why span length does not have an influence. Finding 2: Quotations are mostly defined by their immediate external context. A feature analysis of the CRF model reveals that many important features refer to material outside the quotation itself. For each label (B, I, E), we collect the 50 features with the highest positive and negative values, respectively. We first identify the subset of those features that looks look left or right. As the upper part of Table 5 shows, a substantial number of B (begin) features look to the left, and a number of E (end) features look to the right. 
Thus, these features do not look at the quotation itself, but at its immediate external context. We next divide the features into four broad categories (cues, other lexical information, structural and syntactic features, and punctuation including quotation marks). The results in the lower part of Table 5 show that the begin and end classes rely on a range of categories, including lexical, cue and punctuation outside the quotation. The situation is different for inside tokens (I), where most features express structural and syntactic properties of the quotation such as the length of a sentence and its syntactic relation to a cue. Together, these observations suggest that one crucial piece of information about quotations is their lexical and orthographic context: the factors that mark a quotation as a quotation. Another crucial piece are internal structural properties of the quotation, while lexical properties of the quotation are not very important: which makes sense, since almost anything can be quoted. The feature analysis is bolstered by an error analysis of the false negatives in the high-precision low-recall CRF. The first reason for false negatives is indeed the occurrence of infrequent cues which the cue model fails to identify (e.g., read or acknowledge). The second one is that the model does attempt to learn syntactic features, but that the structural features that can be learned by the CRF (such as C7, C10 or S4) can model only local windows of the quality of the quotation, but not its global quality. This leads us to our third finding. Finding 3: Simple models cannot capture dependencies between begin and end boundaries well. Given the importance of cues, as evidenced by our Finding 2, we can ask whether the boundary of the quotation that is adjacent to its associated cue (“cue-near”) is easier to identify than the other boundary (“cue-far”) whose context is less informative. To assess this question, we evaluate the recall of individual boundary detection at the token level. For the CRF, “cue-far” boundaries of spans indeed tend to be more difficult to detect than “cue-near” ones. The results in Table 6 show that both the GREEDY and the CRF model show a marked asym1742 GREEDY CRF SEMIMARKOV cue-near 76 74 76 cue-far 72 71 75 Table 6: Recall on boundaries by cue position metry and perform considerably worse (3 % and 4 %, respectively) on the cue-far boundary. This asymmetry is considerably weaker for the SEMIMARKOV model, where both boundary types are recognized almost on par. The reason behind this finding is that neither the GREEDY model nor the CRF can condition the choice of the cue-far boundary on the cue-near boundary or on global properties of the quotation – the GREEDY model, because its choices are completely independent, and the CRF model, because its choices are largely independent due to the Markov assumption. Finding 4: The SEMIMARKOV model benefits the most from its ability to handle global features about content spans. This leads us to our final finding about why the SEMIMARKOV model outperforms the CRF – whether it is the model architecture itself, or the new global features that it allows us to formulate. We perform an ablation study whose results are shown in Figure 4. We begin with only the token-level features on the begin, end, and interior tokens of the span, as introduced in Section 2, i.e., the features that the CRF has at its disposal. We find that this model performs on par with the CRF, thus the model architecture on its own does not help. 
We then incrementally add the feature templates containing count statistics of the internal tokens (Template G1 in Table 3) and advanced cue information (G2–G6). Both give the model incremental boosts. Adding syntactic coherence features (G7–G13) completes our full feature set and yields the best results. Thus, the difference comes from features that describe global properties of the quotation. One of the most informative (negative) features is the conjunction from G6. It enforces the constraint that each content span is associated with a single cue. As in the CRF, the actual content of a content span does not play a large role. The only semantic features the model considers concern the presence of named entities within the span. These observations are completed by analysis of the quotation spans that were correctly detected by the SEMIMARKOV model, but not the CRF (in token +internal +cue +structural Feature set F 0.60 0.65 0.70 0.75 * * Figure 4: Strict F1 for different feature sets in the SEMIMARKOV model. *: Difference statistically significant. Dashed line: CRF result. terms of strict recall). We find a large amount of spans with highly ambiguous cue-near tokens such as to (10 % of the cases) that (16 %). We find that often the errors are also related to the frequency or location of cues. As an example, in the sentence [...] he has said [that when he was on the winning side in the 1960s, he knew that the tables might turn in the future]CONT.6 the CRF model incorrectly splits the content span at the second cue candidate knew. This is, however, an embedded quotation that the model should ignore. In contrast, the SEMIMARKOV model makes use of the fact the tokens of the span depend on the same cue, and predicts the span correctly. For these tokens, the distinction between reported speech and factual descriptions is difficult. Arguably, it is the global features that help the model make its call. 7 Related Work Quotation detection has been tackled with a number of different strategies. Pouliquen et al. (2007) use a small set of rules which has high precision but low recall on multilingual text. Krestel et al. (2008) also pursue a rule-based approach, focusing on the roles of cue verbs and syntactic markers. They evaluate on a small set of annotated WSJ documents and again report high precision but low recall. Pareti et al. (2013) develop the state-of-the-art sequence labeling approach discussed in this paper. Our sampling approach builds on that of Zhang et al. (2015), who pursue a similar strategy for parsing, PoS tagging, and sentence segmentation. Similar semi-Markov model approaches have been used for other applications, e.g. by Yang and Cardie 6PARC, wsj 2347 1743 (2012) and Klinger and Cimiano (2013) for sentiment analysis. They also predict spans by sampling, but they draw proposals based on the token or syntactic level. This is not suitable for quotation detection as we deal with much longer spans. 8 Conclusion We have considered the task of quotation detection, starting from the hypothesis that linear-chain CRFs cannot take advantage of all available sequence information due to its Markov assumptions. Indeed, our analyses find that the features most important to recognize a quotation consider its direct context of orthographic evidence (such as quotation marks) and lexical evidence (such as cue words). A simple, greedy algorithm using non-sequential models of quotation boundaries rivals the CRF’s performance. 
For further improvements, we introduce a semi-Markov model capable of taking into account global information about the complete span not available to a linear-chain CRF, such as the presence of cues on both sides of the quotation candidate. This leads to a significant improvement of 3 points F1 over the state of the art. On a more general level, we believe that quotation detection is interesting as a representative of tasks involving long sequences, where Markov assumptions become inappropriate. Other examples of such tasks include the identification of chemical compound names (Krallinger et al., 2015) and the detection of annotator rationales (Zaidan and Eisner, 2008). We have shown that a more expressive semi-Markov model which avoids these assumptions can improve performance. More expressive models however come with harder inference problems which are compounded when applied to longsequence tasks. The informed sampling algorithm we have described performs such efficient inference for our semi-Markov quotation detection model. Acknowledgments This work was funded in part by the DFG through the Sonderforschungsbereich 732. We thank Silvia Pareti for kindly providing the PARC dataset as well as for much information helpful for replicating her results. Further thanks go to Anders Bj¨orkelund and Kyle Richardson for discussion and comments. References Apoorv Agarwal, Augusto Corvalan, Jacob Jensen, and Owen Rambow. 2012. Social network analysis of alice in wonderland. In Proceedings of the NAACLHLT 2012 Workshop on Computational Linguistics for Literature, pages 88–96, Montr´eal, Canada, June. Association for Computational Linguistics. Ricardo Barandela, Jos´e Salvador S´anchez, Vicente Garc´ıa, and Edgar Rangel. 2003. Strategies for learning in class imbalance problems. Pattern Recognition, 36(3):849–851. Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1–8, Philadelphia, PA. David Elson, Nicholas Dames, and Kathleen McKeown. 2010. Extracting social networks from literary fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 138–147, Uppsala, Sweden. Manaal Faruqui and Sebastian Pado. 2012. Towards a model of formal and informal address in English. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 623–633, Avignon, France. Antske Fokkens, Marieke van Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. 2013. Offspring from reproduction problems: What replication failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1691–1701, Sofia, Bulgaria. Roman Klinger and Philipp Cimiano. 2013. Bidirectional inter-dependencies of subjective expressions and targets and their value for a joint model. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 848– 854, Sofia, Bulgaria. Martin Krallinger, Florian Leitner, Obdulia Rabal, Miguel Vazquez, Julen Oyarzabal, and Alfonso Valencia. 2015. CHEMDNER: The drugs and chemical names extraction challenge. Journal of Cheminformatics, 7(Suppl 1):S1. Ralf Krestel, Sabine Bergler, and Ren´e Witte. 2008. Minding the source: Automatic tagging of reported speech in newspaper articles. 
In Proceedings of the International Conference on Language Resources and Evaluation, pages 2823–2828, Marrakech, Morocco. Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, and Jaz S. Kandola. 2002. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 379–386, Sydney, Australia. 1744 Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of ACL System Demonstrations, pages 55–60, Baltimore, MD. Andrew K. McCallum, 2002. MALLET: A Machine Learning for Language Toolkit. User’s manual. Eric T. Nalisnick and Henry S. Baird. 2013. Characterto-character sentiment analysis in shakespeare’s plays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 479–483, Sofia, Bulgaria, August. Association for Computational Linguistics. Eric W. Noreen. 1989. Computer intensive methods for hypothesis testing: An introduction. Wiley, New York. Silvia Pareti, Tim O’Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Automatically detecting and attributing indirect quotations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 989– 999, Seattle, WA. Silvia Pareti. 2015. Attribution: A Computational Approach. Ph.D. thesis, University of Edinburgh. Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multilingual news. In Proceedings of Recent Advances in Natural Language Processing, pages 487–492, Borovets, Bulgaria. Sunita Sarawagi and William W. Cohen. 2004. Semimarkov conditional random fields for information extraction. In Proceedings of Advances in Neural Information Processing Systems, pages 1185–1192, Vancouver, BC. Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Linguistic Data Consortium, Philadelphia. Michael Wick, Khashayar Rohanimanesh, Kedar Bellare, Aron Culotta, and Andrew McCallum. 2011. Samplerank: Training factor graphs with atomic gradients. In Proceedings of the 28th International Conference on Machine Learning, pages 777–784, Bellevue, WA. Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-Markov conditional random fields. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1335–1345, Jeju Island, South Korea. Omar Zaidan and Jason Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 31–40, Honolulu, HI. Yuan Zhang, Chengtao Li, Regina Barzilay, and Kareem Darwish. 2015. Randomized greedy inference for joint segmentation, POS tagging and dependency parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 42–52, Denver, CO. 1745
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1746–1756, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Speech Act Modeling of Written Asynchronous Conversations with Task-Specific Embeddings and Conditional Structured Models Shafiq Joty and Enamul Hoque ALT Research Group Qatar Computing Research Institute — HBKU, Qatar Foundation {sjoty, mprince}@qf.org.qa Abstract This paper addresses the problem of speech act recognition in written asynchronous conversations (e.g., fora, emails). We propose a class of conditional structured models defined over arbitrary graph structures to capture the conversational dependencies between sentences. Our models use sentence representations encoded by a long short term memory (LSTM) recurrent neural model. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTMs provide better task-specific representations, and (ii) the global joint model improves over local models. 1 Introduction Asynchronous conversations, where participants communicate with each other at different times (e.g., fora, emails), have become very common for discussing events, issues, queries and life experiences. In doing so, participants interact with each other in complex ways, performing certain communicative acts like asking questions, requesting information or suggesting something. These are called speech acts (Austin, 1962). For example, consider the excerpt of a forum conversation from our corpus in Figure 1. The participant who posted the first comment C1, describes his situation by the first two sentences and then asks a question in the third sentence. Other participants respond to the query by suggesting something or asking for clarification. In this process, the participants get into a conversation by taking turns, each of which consists of one or more speech acts. The two-part structures across posts like ‘question-answer’ and ‘request-grant’ are called adjacency pairs (Schegloff, 1968). C1: My son wish to do his bachelor degree in Mechanical Engineering in an affordable Canadian university. Human: st, Local: st, Global: st The info. available in the net and the people who wish to offer services are too many and some are misleading. Human: st, Local: st, Global: st The preliminary preparations,eligibility,the require funds etc., are some of the issues which I wish to know from any panel members of this forum .. (truncated) Human: ques, Local: st, Global: st C3 (truncated)...take a list of canadian universities and then create a table and insert all the relevant information by reading each and every program info on the web. Human: sug, Local: sug, Global: sug Without doing a research my advice would be to apply to UVIC .. for the following reasons .. (truncated) Human: sug, Local: sug, Global: sug UBC is good too... but it is expensive particularly for international students due to tuition .. (truncated) Human: sug, Local: sug, Global: sug most of them accept on-line or email application. Human: st, Local: st, Global: st Good luck !! Human: pol, Local: pol, Global: pol C4 snakyy21: UVIC is a short form of? I have already started researching for my brother and found “College of North Atlantic” and .. (truncated) Human: ques, Local: st, Global: ques but not sure about the reputation.. Human: st, Local: res, Global: st C5 thank you for sharing useful tips will follow your advise. 
Human: pol, Local: pol, Global: pol Figure 1: Example conversation with Human annotations and automatic predictions by a Local classifier and a Global classifier. The labels st, ques, sug, and pol refers to Statement, Question, Suggestion, and Polite speech acts, respectively. Identification of speech acts is an important step towards deep conversation analysis in these media (Bangalore et al., 2006), and has been shown to be useful in many downstream applications including summarization (McKeown et al., 2007) and question answering (Hong and Davison, 2009). Previous attempts to automatic (sentence-level) 1746 speech act recognition in asynchronous conversation (Qadir and Riloff, 2011; Jeong et al., 2009; Tavafiet al., 2013; Oya and Carenini, 2014) suffer from at least one of the two major flaws. Firstly, they use bag-of-word (BOW) representation (e.g., unigram, bigram) to encode lexical information in a sentence. However, consider the suggestion sentences in the example. Arguably, a model needs to consider the structure (e.g., word order) and the compositionality of phrases to identify the right speech act. Furthermore, BOW representation could be quite sparse and may not generalize well when used in classification models. Secondly, existing approaches mostly disregard conversational dependencies between sentences. For instance, consider the example again, where we tag the sentences with the human annotations (‘Human’) and with the predictions of a local (‘Local’) classifier that considers word order for sentence representation but classifies each sentence separately. Prediction errors are underlined and highlighted in red. Notice the first and second sentences of comment 4, which are tagged mistakenly as statement and response, respectively, by our best local classifier. We hypothesize that some of the errors made by the local classifier could be corrected by employing a global joint model that performs a collective classification taking into account the conversational dependencies between sentences (e.g., adjacency relations). However, unlike synchronous conversations (e.g., phone, meeting), modeling conversational dependencies between sentences in asynchronous conversation is challenging, especially in those where explicit thread structure (reply-to relations) is missing, which is also our case. The conversational flow often lacks sequential dependencies in its temporal order. For example, if we arrange the sentences as they arrive in the conversation, it becomes hard to capture any dependency between the act types because the two components of the adjacency pairs can be far apart in the sequence. This leaves us with one open research question: how to model the dependencies between sentences in a single comment and between sentences across different comments? In this paper, we attempt to address this question by designing and experimenting with conditional structured models over arbitrary graph structure of the conversation. More concretely, we make the following contributions. Firstly, we propose to use Recurrent Neural Network (RNN) with Long Short Term Memory (LSTM) hidden layer to perform composition of phrases and to represent sentences using distributed condensed vectors (i.e., embeddings). We experiment with both unidirectional and bidirectional RNNs. Secondly, we propose conditional structured models in the form of pairwise Conditional Random Field (Murphy, 2012) over arbitrary conversational structures. 
We experiment with different variations of this model to capture different types of interactions between sentences inside the comments and across the comments. These models use the LSTM encoded vectors as feature vectors for performing the classification task jointly. As a secondary contribution, we also present and release a forum dataset annotated with a standard speech act tagset. We train our models on different settings using synchronous and asynchronous corpora, and evaluate on two forum datasets. Our main findings are: (i) LSTM RNNs provide better representation than BOW; (ii) Bidirectional LSTMs, which encode a sentence using two vectors provide better representation than the unidirectional ones; and (iii) Global joint models improve over local models given that it considers the right graph structure. The source code and the new dataset are available at http://alt.qcri. org/tools/speech-act/ 2 Our Approach Let sn m denote the m-th sentence of comment n in a conversation. Our framework works in two steps as demonstrated in Figure 2. First, we use a recurrent neural network (RNN) to compose sentence representations semantically from their words and to represent them with distributed condensed vectors zn m, i.e., sentence embeddings (Figure 2a). In the second step, a multivariate (graphical) model, which operates on the sentence embeddings, captures conversational dependencies between sentences in the conversation (Figure 2b). In the following, we describe the two steps in detail. 2.1 Sentence Representation One of our main hypotheses is that a sentence representation method should consider the word order of the sentence. To this end, we use an LSTM RNN (Hochreiter and Schmidhuber, 1997) to encode a sentence into a vector by processing its words sequentially, at each time step combining 1747 (a) Bidirectional LSTM-based RNN model (b) Fully connected CRF model Figure 2: Our two-step framework for speech act recognition in asynchronous conversation: (a) a bidirectional LSTM encodes each sentence sn m into a condensed vector zn m and classifies them separately; (b) a fully-connected CRF that takes the encoded vectors as input and performs joint learning and inference. the current input with the previous hidden state. Figure 4b demonstrates the process for three sentences. Each word in the vocabulary V is represented by a D dimensional vector in a shared lookup table L ∈R|V |×D. L is considered a model parameter to be learned. We can initialize L randomly or by pretrained word embedding vectors like word2vec (Mikolov et al., 2013a). Given an input sentence s = (w1, · · · , wT ), we first transform it into a feature sequence by mapping each token wt ∈s to an index in L. The lookup layer then creates an input vector xt ∈RD for each token wt. The input vectors are then passed to the LSTM recurrent layer, which computes a compositional representation −→ h t at every time step t by performing nonlinear transformations of the current input xt and the output of the previous time step −→ h t−1. Specifically, the recurrent layer in a LSTM RNN is constituted with hidden units called memory blocks. A memory block is composed of four elements: (i) a memory cell c (a neuron) with a self-connection, (ii) an input gate i to control the flow of input signal into the neuron, (iii) an output gate o to control the effect of the neuron activation on other neurons, and (iv) a forget gate f to allow the neuron to adaptively reset its current state through the self-connection. 
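As an illustration, one possible rendering of this memory-block update is the minimal NumPy sketch below, mirroring the gate updates formalized in Eqs. 1-5; the parameter layout and function names are assumptions made here for clarity, the hard sigmoid is the usual piecewise-linear approximation, and standard tanh stands in for the hard variant used in the model.

import numpy as np

def hard_sigmoid(x):
    # piecewise-linear approximation of the logistic sigmoid
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def lstm_step(x_t, h_prev, c_prev, U, V, b):
    """One memory-block update mirroring Eqs. 1-5; U and V are dicts of weight
    matrices keyed by 'i', 'f', 'c', 'o', and b holds bias vectors for
    'i', 'f', 'o' (Eq. 3 carries no bias term)."""
    i_t = hard_sigmoid(U['i'] @ h_prev + V['i'] @ x_t + b['i'])          # input gate
    f_t = hard_sigmoid(U['f'] @ h_prev + V['f'] @ x_t + b['f'])          # forget gate
    c_t = i_t * np.tanh(U['c'] @ h_prev + V['c'] @ x_t) + f_t * c_prev   # cell state
    o_t = hard_sigmoid(U['o'] @ h_prev + V['o'] @ x_t + b['o'])          # output gate
    h_t = o_t * np.tanh(c_t)                                             # block output
    return h_t, c_t

Applying this step to the tokens of a sentence from left to right and keeping the final h_t yields the sentence vector z that is fed to the softmax output layer (or, later, to the CRF).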
The following sequence of equations describes how the memory blocks are updated at every time step t:

i_t = \mathrm{sigh}(U_i h_{t-1} + V_i x_t + b_i)   (1)
f_t = \mathrm{sigh}(U_f h_{t-1} + V_f x_t + b_f)   (2)
c_t = i_t \odot \tanh(U_c h_{t-1} + V_c x_t) + f_t \odot c_{t-1}   (3)
o_t = \mathrm{sigh}(U_o h_{t-1} + V_o x_t + b_o)   (4)
h_t = o_t \odot \tanh(c_t)   (5)

where U_k and V_k are the weight matrices between two consecutive hidden layers and between the input and the hidden layers, respectively, which are associated with gate k (input, output, forget and cell), and b_k is the corresponding bias vector. The symbols sigh and tanh denote the hard sigmoid and hard tanh functions, respectively, and the symbol \odot denotes the element-wise product of two vectors. By means of its specifically designed gates (as opposed to simple RNNs), the LSTM is capable of capturing long-range dependencies. We can interpret h_t as an intermediate representation summarizing the past. The output of the last time step \overrightarrow{h}_T = z thus represents the sentence, which can be fed to the output layer of the neural network (Fig. 4b) or to other models (e.g., a fully-connected CRF in Fig. 2b) for classification. The output layer of our LSTM-RNN uses a softmax for multiclass classification. Formally, the probability of the k-th class for classification into K classes is

p(y = k | s, \theta) = \frac{\exp(w_k^T z)}{\sum_{k=1}^{K} \exp(w_k^T z)}   (6)

where w are the output layer weights.

Bidirectionality The RNN described above encodes information that it gets only from the past. However, information from the future could also be crucial for recognizing speech acts. This is especially true for longer sentences, where a unidirectional LSTM can be limited in encoding the necessary information into a single vector. Bidirectional RNNs (Schuster and Paliwal, 1997) capture dependencies from both directions, thus providing two different views of the same sentence. This amounts to having a backward counterpart for each of the equations from 1 to 5. For classification, we use the concatenated vector [\overrightarrow{h}_T, \overleftarrow{h}_T],
is the global normalization constant that ensures a valid probability distribution. We use a log-linear representation for the factors: ψn(yi|z, v) = exp(vT φ(yi, z)) (8) ψe(yi,j|z, w) = exp(wT φ(yi,j, z)) (9) where φ(.) is a feature vector derived from the inputs and the labels. This model is essentially a pairwise conditional random field or PCRF (Murphy, 2012). The global normalization allows CRFs to surmount the so-called label bias problem (Lafferty et al., 2001), allowing them to take longrange interactions into account. The log likelihood for one data point (z, y) (i.e., a conversation) is: f(θ) = X i∈V vT φ(yi, z) + X (i,j)∈E wT φ(yi,j, z) −log Z(v, w, z) (10) This objective is convex, so we can use gradientbased methods to find the global optimum. The gradients have the following form: f ′(v) = X i∈V φ(yi, z) −E[φ(yi, z)] (11) f ′(w) = X (i,j)∈E φ(yi,j, z) −E[φ(yi,j, z)] (12) where E[φ(.)] denote the expected feature vector. Training and Inference Traditionally, CRFs have been trained using offline methods like limited-memory BFGS (Murphy, 2012). Online training of CRFs using stochastic gradient descent (SGD) was proposed by Vishwanathan et al. (2006). Since RNNs are trained with online methods, to compare our two methods, we use SGD to train our CRFs. Algorithm 1 in the Appendix gives a pseudocode of the training procedure. We use Belief Propagation or BP (Pearl, 1988) for inference in our graphical models. BP is guaranteed to converge to an exact solution if the graph is a tree. However, exact inference is intractable for graphs with loops. Despite this, it has been advocated by Pearl (1988) to use BP in loopy graphs as an approximation; see also (Murphy, 2012), page 768. The algorithm is then called “loopy” BP, or LBP. Although LBP gives approximate solutions for general graphs, it often works well in practice (Murphy et al., 1999), outperforming other methods such as mean field (Weiss, 2001). Variations of Graph Structures One of the main advantages of our pairwise CRF is that we can define this model over arbitrary graph structures, which allows us to capture conversational dependencies at various levels. We distinguish between two types of dependencies: (i) intra-comment, which defines how the labels of the sentences in a comment are connected; and (ii) across-comment, which defines how the labels of the sentences across comments are connected. Table 1 summarizes the connection types that we have explored in our models. Each configuration of intra- and across- connections yields a different pairwise CRF model. Figure 3 shows four such CRFs with three comments — C1 being the first comment, and Ci and Cj being two other comments in the conversation. 1749 Tag Connection type Applicable to NO No connection between nodes intra & across LC Linear chain connection intra & across FC Fully connected intra & across FC1 Fully connected with first comment only across LC1 Linear chain with first comment only across Table 1: Connection types in CRF models. (a) NO-NO (MaxEnt) (b) LC-LC (c) LC-LC1 (d) LC-FC1 Figure 3: CRFs over different graph structures. Figure 3a shows the structure for NO-NO configuration, where there is no link between nodes of both intra- and across- comments. In this setting, the CRF model is equivalent to MaxEnt. Figure 3b shows the structure for LC-LC, where there are linear chain relations between nodes of both intra- and across- comments. 
The linear chain across comments refers to the structure, where the last sentence of each comment is connected to the first sentence of the comment that comes next in the temporal order (i.e., posting time). Figures 3c shows the CRF for LC-LC1, where sentences inside a comment have linear chain connections, and the last sentence of the first comment is connected to the first sentence of the other comments. Similarly, Figure 3d shows the graph structure for LC-FC1 configuration, where sentences inside comments have linear chain connections, and sentences of the first comment are fully connected with the sentences of the other comments. 3 Corpora There exist large corpora of utterances annotated with speech acts in synchronous spoken domains, e.g., Switchboard-DAMSL or SWBD (Jurafsky et al., 1997) and Meeting Recorder Dialog Act or MRDA (Dhillon et al., 2004). However, such large corpus does not exist in asynchronous domains. Some prior work (Cohen et al., 2004; Ravi and Kim, 2007; Feng et al., 2006; Bhatia et al., 2014) tackles the task at the comment level, and uses TA BC3 Total number of conv. 200 39 Avg. nb of comments per conv. 4.02 6.54 Avg. nb of sentences per conv. 18.56 34.15 Avg. nb of words per sentence 14.90 12.61 Table 2: Statistics about TA and BC3 corpora. Tag Description TA BC3 MRDA SU Suggestion 7.71% 5.48% 5.97% R Response 2.4% 3.75% 15.63% Q Question 14.71% 8.41% 8.62% P Polite 9.57% 8.63% 3.77% ST Statement 65.62% 73.72% 66.00% Table 3: Distribution of speech acts in our corpora. task-specific tagsets. In contrast, in this work we are interested in identifying speech acts at the sentence level, and also using a standard tagset like the ones defined in SWBD and MRDA. More recent studies attempt to solve the task at the sentence level. Jeong et al. (2009) first created a dataset of TripAdvisor (TA) forum conversations annotated with the standard 12 act types defined in MRDA. They also remapped the BC3 email corpus (Ulrich et al., 2008) according to this tagset. Table 10 in the Appendix presents the tags and their relative frequency in the two datasets. Subsequent studies (Joty et al., 2011; Tavafiet al., 2013; Oya and Carenini, 2014) use these datasets. We also use these datasets in our work. Table 2 shows some basic statistics about these datasets. On average, BC3 conversations are longer than TA in both number of comments and number of sentences. Since these datasets are relatively small in size, we group the 12 acts into 5 coarser classes to learn a reasonable classifier.1 More specifically, all the question types are grouped into one general class Question, all response types into Response, and appreciation and polite mechanisms into Polite class. Also since deep neural models like LSTM RNNs require a lot of training data, we also utilize the MRDA meeting corpus. Table 3 shows the label distribution of the resultant datasets. Statement is the most dominant class, followed by Question, Polite and Suggestion. QC3 Conversational Corpus Since both TA and BC3 are quite small to make a general comment about model performance in asynchronous 1Some prior work (Tavafiet al., 2013; Oya and Carenini, 2014) also took the same approach. 1750 Speech Act Distribution κ Suggestion 17.38% 0.86 Response 5.24% 0.43 Question 12.59% 0.87 Polite 6.13% 0.75 Statement 58.66% 0.78 Table 4: Corpus statistics for QC3. conversation, we have created a new dataset called Qatar Computing Conversational Corpus or QC3. 
We selected 50 conversations from a popular community question answering site named Qatar Living2 for our annotation. We used 3 conversations for our pilot study and used the remaining 47 for the actual study. The resultant corpus on average contains 13.32 comments and 33.28 sentences per conversation, and 19.78 words per sentence. Two native speakers of English annotated each conversation using a web-based annotation framework. They were asked to annotate each sentence with the most appropriate speech act tag from the list of 5 speech act types. Since this task is not always obvious, we gave them detailed annotation guidelines with real examples. We use Cohens Kappa κ to measure the agreement between the annotators. Table 4 presents the distribution of the speech acts and their respective κ values.After Statement, Suggestion is the most frequent class, followed by Question and Polite. The κ varies from 0.43 (for Response) to 0.87 (for Question). Finally, in order to create a consolidated dataset, we collected the disagreements and employed a third annotator to resolve those cases. 4 Experiments and Analysis In this section we present our experimental settings, results and analysis. We evaluate our models on the two forum corpora QC3 and TA. For performance comparison, we use both accuracy and macro-averaged F1 score. Accuracy gives the overall performance of a classifier but could be biased to most populated ones. Macro-averaged F1 weights equally every class and is not influenced by class imbalance. Statistical significance tests are done using an approximate randomization test based on the accuracy.3 We used SIGF V.2 (Pad´o, 2006) with 10,000 iterations. 2http://www.qatarliving.com/ 3Significance tests operate on individual instances rather than individual classes; thus not applicable for macro F1. Corpora Type Train Dev. Test QC3 asynchronous 1252 157 156 TA asynchronous 2968 372 371 BC3 asynchronous 1065 34 133 MRDA synchronous 50865 8366 10492 Total asyn. + sync. 56150 8929 11152 Table 5: Number of sentences in train, development and test sets for different datasets. Because of the noise and informal nature of conversational texts, we performed a series of preprocessing steps. We normalize all characters to their lower-cased forms, truncate elongations to two characters, spell out every digit and URL. We further tokenized the texts using the CMU TweetNLP tool (Gimpel et al., 2011). In the following, we first demonstrate the effectiveness of LSTM RNNs for learning representations of sentences automatically to identify their speech acts. Then in subsection 4.2, we show the usefulness of pairwise CRFs for capturing conversational dependencies in speech act recognition. 4.1 Effectiveness of LSTM RNNs To show the effectiveness of LSTMs for learning sentence representations, we split each of our asynchronous corpora randomly into 70% sentences for training, 10% for development, and 20% for testing. For MRDA, we use the same train-test-dev split as Jeong et al. (2009). Table 5 summarizes the resultant datasets. We compare the performance of LSTMs with that of MaxEnt (ME) and Multi-layer Perceptron (MLP) with one hidden layer.4 Both ME and MLP were fed with the bag-of-word (BOW) representations of the sentence, i.e., vectors containing binary values indicating the presence or absence of a word in the training set vocabulary. 
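For concreteness, such binary BOW inputs can be produced with a standard vectorizer; the following is an illustrative sketch (scikit-learn is used here as a stand-in for the actual feature pipeline), where train_sentences and test_sentences are assumed to be lists of sentence strings.

from sklearn.feature_extraction.text import CountVectorizer

# binary presence/absence indicators over the training-set vocabulary
bow = CountVectorizer(binary=True)
X_train = bow.fit_transform(train_sentences)
X_test = bow.transform(test_sentences)   # words unseen in training are dropped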
We train the models by optimizing the cross entropy using the gradient-based online learning algorithm ADAM (Kingma and Ba, 2014).5 The learning rate and other parameters were set to the values as suggested by the authors. To avoid overfitting, we use dropout (Srivastava et al., 2014) of hidden units and early stopping based on the loss on the development set.6 Maximum number of epochs was set to 25 for RNNs and 100 for ME and MLP. We experimented with {0.0, 0.2, 0.4} 4More hidden layers worsened the performance. 5Other algorithms (SGD, Adagrad) gave similar results. 6l1 and l2 regularization on weights did not work well. 1751 dropout rates, {16, 32, 64} minibatch sizes, and {100, 150, 200} hidden layer units in MLP and in LSTMs. The vocabulary (V ) in LSTMs was limited to the most frequent P% (P ∈{85, 90, 95}) words in the training corpus. We initialize the word vectors in the loop-up table L in one of two ways: (i) by sampling randomly from the small uniform distribution U(−0.05, 0.05), and (ii) by using pretrained 300 dimensional Google word embeddings from Mikolov et al. (2013b). The dimension for random initialization was set to 128. We experimented with four LSTM variations: (i) U-LSTMr, referring to unidirectional with random initialization; (ii) U-LSTMp, referring to unidirectional with pretrained initialization; (iii) BLSTMr, referring to bidirectional with random initialization; and (iv) B-LSTMp, referring to bidirectional with pretrained initialization. Table 6 shows the results for different models for the data splits in Table 5. The first two rows show the best results reported so far on the MRDA corpus from (Jeong et al., 2009) for classifying into 12 act types. The first row shows the results of the model that uses n-grams and the second row shows the results using all the features including speaker, part-of-speech, and dependency structure. Our LSTM RNNs and their n-gram model therefore use the same word sequence information. To compare our results with the state of the art, we ran our models on MRDA for both 5-class and 12-class classification tasks. The results are shown at the right most part of Table 6. Notice that all of our LSTMs achieve state of the art results and B-LSTMp achieves even significantly better with 99% confidence level. This is remarkable since our LSTMs learn the sentence representation automatically from the word sequence and do not use any hand-engineered features. Now consider the asynchronous domains QC3 and TA, where we show the results of our models based on 5-fold cross validation, in addition to the random (20%) testset. The 5-fold setting allows us to get more general performance of the models on a particular corpus. The comparison between our LSTMs shows that: (i) pretrained Google vectors provide better initialization to LSTMs than the random ones; (ii) bidirectional LSTMs outperform their unidirectional counterparts. When we compare these results with those of our baselines, the results are disappointing; the ME and MLP using BOW outperform LSTMs by a good margin. SU R Q P ST SU 34 0 1 0 27 R 0 4 0 2 12 Q 0 0 64 0 13 P 0 0 1 35 6 ST 8 1 3 4 311 (a) B-LSTMp SU R Q P ST SU 21 1 1 0 39 R 0 6 0 1 11 Q 0 0 63 0 14 P 0 0 1 32 9 ST 8 2 0 2 316 (b) MLP Figure 4: Confusion matrices for (a) B-LSTMp and (b) MLP on the testsets of QC3 and TA. However, this is not surprising since deep neural networks like LSTMs have a lot of parameters, for which they require a lot of data to learn from. 
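For reference, a B-LSTMp-style classifier along the lines just described could be assembled roughly as follows; this is a hedged Keras sketch with assumed layer sizes and wiring (hidden size, function name and defaults are illustrative choices, not a faithful reproduction), where emb_weights would hold a 300-dimensional pretrained word2vec matrix, or be left out for random initialization.

from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense

def build_blstm(vocab_size, num_classes, emb_dim=300, hidden=100,
                emb_weights=None, dropout=0.2):
    model = Sequential()
    # lookup table L, either random or initialized with pretrained embeddings
    model.add(Embedding(vocab_size, emb_dim,
                        weights=None if emb_weights is None else [emb_weights]))
    # forward and backward LSTMs; their final states are concatenated
    model.add(Bidirectional(LSTM(hidden), merge_mode='concat'))
    model.add(Dropout(dropout))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

The unidirectional variants correspond to simply dropping the Bidirectional wrapper.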
To validate our claim, we create another training setting CAT by merging the training and development sets of the four corpora in Table 5 (see the Train and Dev. columns in the last row); the testset for each dataset however remains the same. Table 7 shows the results of the baselines and the B-LSTMp on the QC3 and TA testsets. In both datasets, B-LSTMp outperforms ME and MLP significantly. When we compare these results with those in Table 6, we notice that B-LSTMp, by virtue of its distributed and condensed representation, generalizes well across different domains. In contrast, ME and MLP, because of their BOW representation, suffer from data diversity of different domains. These results also confirm that BLSTMp gives better sentence representation than BOW, when it is given enough data. To analyze further the cases where B-LSTMp makes a difference, Figure 4 shows the corresponding confusion matrices for B-LSTMp and MLP on the concatenated testsets of QC3 and TA. It is noticeable that B-LSTMp is less affected by class imbalance and it can detect more suggestions than MLP. This indicates that LSTM RNNs can model the grammar of the sentence when composing the words into phrases sequentially. 4.2 Effectiveness of CRFs To demonstrate the effectiveness of CRFs for capturing inter-sentence dependencies in an asynchronous conversation, we create another dataset setting called CON, in which the random splits are done at the conversation (as opposed to sentence) level for the asynchronous corpora. This is required because our CRF models perform joint learning and inference based on a full conversation. As presented in Table 8, this setting contains 197 and 24 conversations for training and devel1752 QC3 TA MRDA Testset 5 folds Testset 5 folds 5 classes 12 classes Jeong et al. (ng) 57.53 (83.30) Jeong et al. (All) 59.04 (83.49) ME 55.12 (75.64) 50.23 (71.37) 61.4 (85.44) 59.23 (84.85) 65.25 (83.95) 57.79 (82.84) MLP 61.30 (74.36) 54.57 (71.63) 68.17 (85.98) 62.41 (85.02) 68.12 (84.24) 58.19 (83.24) U-LSTMr 51.57 (73.55) 48.64 (65.94) 56.54 (83.24) 56.39 (83.83) 71.29 (85.38) 58.72 (83.34) U-LSTMp 49.41 (70.97) 50.26 (65.62) 63.12(83.78) 59.10 (83.13) 72.32 (85.19) 59.05 (84.06) B-LSTMr 50.75 (72.26) 48.41 (66.19) 58.88 (82.97) 56.23 (83.34) 71.69 (85.62) 58.33 (83.49) B-LSTMp 53.22 (71.61) 51.59 (68.50) 60.73 (82.97) 59.68 (84.07) 72.02 (85.33) 60.12 (84.46*) Table 6: Macro-averaged F1 and raw accuracy (in parenthesis) for baselines and LSTM variants on the testset and 5-fold splits of different corpora. For MRDA, we use the same train-test-dev split as (Jeong et al., 2009). Accuracy significantly superior to state-of-the-art is marked with *. QC3 (Testset) TA (Testset) ME 50.64 (71.15) 72.49 (84.10) MLP 58.60 (74.36) 73.07 (86.29) B-LSTMp 66.40 (80.65*) 73.14 (87.01*) Table 7: Results on CAT dataset. Train Dev Test QC3 38 (1332) 4 (111) 5 (122) TA 160 (2957) 20 (310) 20 (444) Total 197 (4289) 24 (421) 25 (566) Table 8: Setting for CON dataset. The numbers inside parentheses indicate the number of sentences. opment, respectively.7 The testsets contain 5 and 20 conversations for QC3 and TA, respectively. As baselines, we use three models: (i) MEb, a MaxEnt using BOW representation; (ii) BLSTMp, which is now trained on the concatenated set of sentences from MRDA and CON training sets; and (iii) MEe, a MaxEnt using sentence embeddings extracted from the B-LSTMp, i.e., the sentence embeddings are used as feature vectors. We experiment with the CRF variants in Table 1. 
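To mirror the MEe baseline (and the embeddings later handed to the CRFs), the sentence vectors can be read off the layer below the softmax of a trained bidirectional LSTM. The sketch below is illustrative only and reuses the hypothetical build_blstm function from the earlier sketch; X_train (padded word-index sequences), Y_train (one-hot labels) and y_labels (integer labels) are assumed to be given.

from keras.models import Model
from sklearn.linear_model import LogisticRegression

blstm = build_blstm(vocab_size=20000, num_classes=5)
blstm.fit(X_train, Y_train, epochs=25, batch_size=32)

# encoder exposing the concatenated [forward; backward] sentence vector
encoder = Model(inputs=blstm.input, outputs=blstm.layers[1].output)
Z_train = encoder.predict(X_train)        # sentence embeddings z

# MaxEnt (multinomial logistic regression) over the embeddings: the MEe baseline
maxent = LogisticRegression(multi_class='multinomial', solver='lbfgs',
                            max_iter=1000).fit(Z_train, y_labels)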
The CRFs are trained on the CON training set using the sentence embeddings that are extracted by applying the B-LSTMp model, as was done with MEe. Table 9 shows our results. We notice that CRFs generally outperform MEs in accuracy. This indicates that there are conversational dependencies between the sentences in a conversation. When we compare between CRF variants, we notice that the model that does not consider any link across comments perform the worst; see CRF (LC-NO). A simple linear chain connection between sentences in their temporal order does not 7We use the concatenated sets as train and dev. sets. QC3 TA MEb 56.67 (67.21) 63.29 (84.23) B-LSTMp 65.15 (77.87) 66.93 (85.13) MEe 59.94 (77.05) 59.55 (85.14) CRF (LC-NO) 62.20 (77.87) 60.30 (85.81) CRF (LC-LC) 62.35 (78.69) 60.30 (85.81) CRF (LC-LC1) 65.94 (80.33*) 61.58 (86.54) CRF (LC-FC1) 61.18 (77.87) 60.00 (85.36) CRF (FC-FC) 64.54 (79.51*) 61.64 (86.81*) Table 9: Results of CRFs on CON dataset. improve much (CRF (LC-LC)), which indicates that the widely used linear chain CRF (Lafferty et al., 2001) is not the most appropriate model for capturing conversational dependencies in these conversations. The CRF (LC-LC1) is one of the best performing models and perform significantly (with 99% confidence) better than B-LSTMp.8 This model considers linear chain connections between sentences inside comments and only to the first comment. Note that both QC3 and TA are forum sites, where participants in a conversation interact mostly with the person who posts the first comment asking for some information. This is interesting that our model can capture this aspect. Another interesting observation is that when we change the above model to consider relations with every sentence in the first comment (CRF (LCFC1)), this degrades the performance. This could be due to the fact that the information seeking person first explains her situation, and then asks for the information. Others tend to respond to the requested information rather than to her situation. The CRF (FC-FC) also yields as good results as CRF (LC-LC1). This could be attributed to the robustness of the fully-connected CRF, which learns 8Significance was computed on the concatenated testset. 1753 from all possible relations. To see some real examples in which CRF by means of its global learning and inference makes a difference, let us consider the example in Figure 1 again. We notice that the two sentences in comment C4 were mistakenly identified as Statement and Response, respectively, by the B-LSTMp local model. However, by considering these two sentences together with others in the conversation, the global CRF (FC-FC) model could correct them. 5 Related Work Three lines of research are related to our work: (i) semantic compositionality with LSTM RNNs, (ii) conditional structured models, and (iii) speech act recognition in asynchronous conversations. LSTM RNNs for composition Li et al. (2015) compare recurrent neural models with recursive (syntax-based) models for several NLP tasks and conclude that recurrent models perform on par with the recursive for most tasks (or even better). For example, recurrent models outperform recursive on sentence level sentiment classification. This finding motivated us to use recurrent models rather than recursive. The application of LSTM RNNs to speech act recognition is novel to the best of our knowledge. LSTM RNNs have also been applied to sequence tagging in opinion mining (Irsoy and Cardie, 2014; Liu et al., 2015). 
Conditional structured models There has been an explosion of interest in CRFs for solving structured output problems in NLP; see (Smith, 2011) for an overview. Linear chain (for sequence labeling) and tree structured CRFs (for parsing) are the common ones in NLP. However, speech act recognition in asynchronous conversation posits a different problem, where the challenge is to model arbitrary conversational structures. In this work we propose a general class of models based on pairwise CRFs that work on arbitrary graph structures. Speech act recognition in asynchronous conversation Jeong et al. (2009) use semi-supervised boosting to tag the sentences in email and forum discussions with speech acts by adapting knowledge from spoken conversations. Other sentencelevel approaches use supervised classifiers and sequence taggers (Qadir and Riloff, 2011; Tavafiet al., 2013; Oya and Carenini, 2014). Cohen et al. (2004) first use the term email speech act for classifying emails based on their acts (deliver, meeting). Their classifiers do not capture any contextual dependencies between the acts. To model contextual dependencies, Carvalho and Cohen (2005) use a collective classification approach with two different classifiers, one for content and one for context, in an iterative algorithm. Our approach is similar in spirit to their approach with three crucial differences: (i) our CRFs are globally normalized to surmount the label bias problem, where their classifiers are normalized locally; (ii) the graph structure of the conversation is given in their case, which is not the case with ours; and (iii) their approach works at the comment level, where we work at the sentence level. 6 Conclusions and Future Work We have presented a two-step framework for speech act recognition in asynchronous conversation. A LSTM RNN first composes sentences into vector representations by considering the word order. Then a pairwise CRF jointly models the intersentence dependencies in the conversation. We experimented with different LSTM variants (uni- vs. bi-directional, random vs. pretrained initialization), and different CRF variants depending on the underlying graph structure. We trained our models on many different settings using synchronous and asynchronous corpora and evaluated on two forum datasets, one of which is presented in this work. Our results show that LSTM RNNs provide better representations but requires more data, and global joint models improve over local models given that it considers the right graph structure. In the future, we would like to combine CRFs with LSTMs for doing the two steps jointly, so that the LSTMs can learn the embeddings using the global thread-level feedback. This would require the backpropagation algorithm to take error signals from the loopy BP inference. We would also like to apply our models to conversations, where the graph structure is extractable using the meta data or other clues, e.g., the fragment quotation graphs for email threads (Carenini et al., 2008). Acknowledgments We thank Aseel Ghazal for her effort in creating the QC3 corpus. This work is part of the Interactive sYstems for Answer Search (IYAS) project. 1754 References John Langshaw Austin. 1962. How to do things with words. Harvard University Press. Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2006. Learning the structure of taskdriven human-human dialogs. In Proceedings of the 44nd Annual Meeting on Association for Computational Linguistics, ACL’06, pages 201–208. ACL. 
Sumit Bhatia, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions – can dialog acts of individual messages help? In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2127–2131, Doha, Qatar, October. ACL. Giuseppe Carenini, Raymond T. Ng, and Xiaodong Zhou. 2008. Summarizing emails with conversational cohesion and subjectivity. In Proceedings of the 46nd Annual Meeting on Association for Computational Linguistics, ACL’08, pages 353–361, OH. ACL. Vitor R. Carvalho and William W. Cohen. 2005. On the collective classification of email ”speech acts”. In SIGIR ’05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 345– 352, New York, NY, USA. ACM Press. William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify email into “speech acts”. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 309–316. Rajdip Dhillon, Sonali Bhagat, Hannah Carvey, and Elizabeth Shriberg. 2004. Meeting Recorder Project: Dialog Act Labeling Guide. Technical report, ICSI Tech. Report. Donghui Feng, Erin Shaw, Jihie Kim, and Eduard Hovy. 2006. Learning to detect conversation focus of threaded discussions. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 208–215, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papersVolume 2, pages 42–47. ACL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Liangjie Hong and Brian D. Davison. 2009. A classification-based approach to question answering in discussion boards. In 32nd Annual International ACM SIGIR Conference on Research and Development on Information Retrieval, pages 171–178, Boston, USA. ACM Press. Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720–728, Doha, Qatar. ACL. Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1250–1259, Singapore. ACL. Shafiq Joty, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised Modeling of Dialog Acts in Asynchronous Conversations. In Proceedings of the twenty second International Joint Conference on Artificial Intelligence, IJCAI’11, pages 1–130, Barcelona. Dan Jurafsky, Liz Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL Shallow-DiscourseFunction Annotation Coders Manual, Draft 13. Technical report, University of Colorado at Boulder & +SRI International. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2304– 2314, Lisbon, Portugal, September. ACL. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1433–1443, Lisbon, Portugal, September. ACL. Kathleen McKeown, Lokesh Shrestha, and Owen Rambow. 2007. Using question-answer pairs in extractive summarization of email conversations. In CICLing, pages 542–550. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. 1755 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI’99, pages 467–475, Stockholm, Sweden. Morgan Kaufmann Publishers Inc. Kevin Murphy. 2012. Machine Learning A Probabilistic Perspective. The MIT Press. Tatsuro Oya and Giuseppe Carenini. 2014. Extractive summarization and dialogue act modeling on email threads: An integrated probabilistic approach. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), page 133–140, Philadelphia, PA, U.S.A. ACL. Sebastian Pad´o, 2006. User’s guide to sigf: Significance testing by approximate randomisation. Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. Ashequl Qadir and Ellen Riloff. 2011. Classifying sentences as speech acts in message board posts. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 748–758, Edinburgh, Scotland, UK. Association for Computational Linguistics. Sujith Ravi and Jihie Kim. 2007. Profiling Student Interactions in Threaded Discussions with Speech Act Classifiers. In Proceedings of AI in Education Conference (AIED 2007). Emanuel A. Schegloff. 1968. Sequencing in conversational openings. American Anthropologist, 70(6):1075–1095. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Noah A. Smith. 2011. Linguistic Structure Prediction. Synthesis Lectures on Human Language Technologies. Morgan and Claypool, May. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Maryam Tavafi, Yashar Mehdad, Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2013. Dialogue act recognition in synchronous and asynchronous conversations. In Proceedings of the 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), page 117–121, Metz, France. ACL. 
Jan Ulrich, Gabriel Murray, and Giuseppe Carenini. 2008. A publicly available annotated corpus for supervised email summarization. In AAAI’08 EMAIL Workshop, Chicago, USA. AAAI. S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, pages 969–976, Pittsburgh, USA. ACM. Yair Weiss. 2001. Comparing the mean field method and belief propaga- tion for approximate inference in mrfs. In Advanced Mean Field Methods. MIT Press. A Appendix Algorithm 1: Online learning algorithm for conditional random fields 1. Initialize the model parameters v and w; 2. repeat for each thread G = (V, E) do a. Compute node and edge factors ψn(yi|z, v) and ψe(yi,j|z, w); b. Infer node and edge marginals using sum-product loopy BP; c. Update: v = v −η 1 |V |f′(v); d. Update: w = w −η 1 |E|f′(w) ; end until convergence; Tag Description BC3 TA S Statement 69.56% 65.62% P Polite mechanism 6.97% 9.11% QY Yes-no question 6.75% 8.33% AM Action motivator 6.09% 7.71% QW Wh-question 2.29% 4.23% A Accept response 2.07% 1.10% QO Open-ended question 1.32% 0.92% AA Acknowledge and appreciate 1.24% 0.46% QR Or/or-clause question 1.10% 1.16% R Reject response 1.06% 0.64% U Uncertain response 0.79% 0.65% QH Rhetorical question 0.75% 0.08% Table 10: Dialog act tags and their relative frequencies in the BC3 and TA corpora. 1756
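Algorithm 1 above can be read as the following rough Python sketch. The conversation-graph interface (.nodes, .edges, gold labels) and the sum-product loopy-BP routine infer_marginals are assumed placeholders rather than an actual implementation, and the update is written as gradient ascent on the log likelihood of Eq. 10, which is equivalent to the descent step on the negative log likelihood in the pseudocode.

import numpy as np

def train_pairwise_crf(conversations, node_feats, edge_feats, infer_marginals,
                       dim_v, dim_w, eta=0.01, epochs=20):
    """Online training of the pairwise CRF (cf. Algorithm 1).
    conversations: graph objects G with .nodes, .edges, .gold (node labels)
    and .gold_pair (edge labels) -- an assumed interface."""
    v = np.zeros(dim_v)                    # node-factor weights
    w = np.zeros(dim_w)                    # edge-factor weights
    for _ in range(epochs):                # "repeat ... until convergence"
        for G in conversations:
            # (a)-(b): compute factors and node/edge marginals via loopy BP
            node_marg, edge_marg = infer_marginals(G, v, w)
            # gradient = empirical minus expected feature counts (Eqs. 11-12)
            grad_v = sum(node_feats(G, i, G.gold[i])
                         - sum(p * node_feats(G, i, y)
                               for y, p in node_marg[i].items())
                         for i in G.nodes)
            grad_w = sum(edge_feats(G, e, G.gold_pair[e])
                         - sum(p * edge_feats(G, e, yy)
                               for yy, p in edge_marg[e].items())
                         for e in G.edges)
            # (c)-(d): per-conversation updates, normalized by |V| and |E|
            v += eta * grad_v / len(G.nodes)
            w += eta * grad_w / len(G.edges)
    return v, w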
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1757–1768, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Situation entity types: automatic classification of clause-level aspect Annemarie Friedrich1 Alexis Palmer2 Manfred Pinkal1 1Department of Computational Linguistics, Saarland University, Germany 2Leibniz ScienceCampus, Dept. of Computational Linguistics, Heidelberg University, Germany {afried,pinkal}@coli.uni-saarland.de [email protected] Abstract This paper describes the first robust approach to automatically labeling clauses with their situation entity type (Smith, 2003), capturing aspectual phenomena at the clause level which are relevant for interpreting both semantics at the clause level and discourse structure. Previous work on this task used a small data set from a limited domain, and relied mainly on words as features, an approach which is impractical in larger settings. We provide a new corpus of texts from 13 genres (40,000 clauses) annotated with situation entity types. We show that our sequence labeling approach using distributional information in the form of Brown clusters, as well as syntactic-semantic features targeted to the task, is robust across genres, reaching accuracies of up to 76%. 1 Introduction Clauses in text have different aspectual properties, and thus contribute to the discourse in different ways. Distinctions that have been made in the linguistic and semantic theory literature include the classification of states, events and processes (Vendler, 1957; Bach, 1986), and whether clauses introduce particular eventualities or report regularities generalizing either over events or members of a kind (Krifka et al., 1995). Such aspectual distinctions are relevant to natural language processing tasks requiring text understanding such as information extraction (Van Durme, 2010) or temporal processing (Costa and Branco, 2012). In this paper, we are concerned with automatically identifying the type of a situation entity (SE), which we assume to be expressed by a clause. Specifically, we present a system for automatically labeling clauses using the inventory of STATE: The colonel owns the farm. EVENT: John won the race. REPORT: “...”, said Obama. GENERIC SENTENCE: Generalizations over kinds. The lion has a bushy tail. GENERALIZING SENTENCE: Generalizations over events (habituals). Mary often fed the cat last year. QUESTION: Who wants to come? IMPERATIVE: Hand me the pen! Figure 1: SE types, adapted from Smith (2003). SE types shown in Figure 1 (Smith, 2003, 2005; Palmer et al., 2007). The original motivation for the above inventory of SE types is the observation that different modes of discourse, a classification of linguistic properties of text at the passage level, have different distributions of SE types. For example, EVENTs and STATEs are predominant in narrative passages, while GENERIC SENTENCEs occur frequently in information passages. A previous approach to automatically labeling SE types (Palmer et al., 2007) – referred to here as UT07 – captures interesting insights, but is trained and evaluated on a relatively small amount of text (about 4300 clauses), mainly from one rather specialized subsection of the Brown corpus. The data shows a highly skewed distribution of SE types and was annotated in an intuitive fashion with only moderate agreement. In addition, the UT07 system relies mostly on part-of-speech tags and words as features. 
The latter are impractical when dealing with larger data sets and capture most of the corpus vocabulary, overfitting the model to the data set. Despite this overfitting, the system’s accuracy is only around 53%. We address these shortcomings, developing a robust system that delivers high performance compared to the human upper bound across a range of genres. Our approach uses features which increase 1757 robustness: Brown clusters and syntactic-semantic features. Our models for labeling texts with the aspectual properties of clauses in the form of SE types reach accuracies of up to 76%. In an oracle experiment, Palmer et al. (2007) show that including the gold labels of the previous clauses as features into their maximum entropy model is beneficial. We implement the first true sequence labeling model for SE types, using conditional random fields to find the globallybest sequence of labels for the clauses in a document. Performance increases by around 2% absolute compared to predicting labels for clauses separately; much of this effect stems from the fact that GENERIC SENTENCEs often occur together. Moving well beyond the single-domain setting, our models are trained and evaluated on a multigenre corpus of approximately 40,000 clauses from MASC (Ide et al., 2008) and Wikipedia which have been annotated with substantial agreement. We train and test our models both within genres and across genres, highlighting differences between genres and creating models that are robust across genres. Both the corpus and the code for an SE type labeler are freely available.1 These form the basis for future research on SE types and related aspectual phenomena and will enable the inclusion of SE type information as a preprocessing step into various NLP tasks. 2 Linguistic background The inventory of SE types proposed by Smith (2003) consists of three high-level categories, each with two subtypes. Eventualities comprise EVENT and STATE, categories for clauses representing actual happenings, states of the world, or attributes of entities or situations. General Statives include GENERIC SENTENCE and GENERALIZING SENTENCE and reflect regularities in the world or general information predicated over classes or kinds. Finally, Abstract Entities (Figure 2) have the subtypes FACT and PROPOSITION. Although Abstract Entities are part of the label inventory for UT07, we treat them in a separate identification step, for reasons discussed in Section 7. The inventory was expanded by Palmer et al. (2007) to include three additional types: REPORT, QUESTION and IMPERATIVE. The latter two categories were added to accommodate exhaustive annota1Corpora, annotation manual and code available at www.coli.uni-saarland.de/projects/sitent FACT: Objects of knowledge. I know that Mary refused the offer. PROPOSITION: Objects of belief. I believe that Mary refused the offer. Figure 2: Abstract Entity SE types. tion of text; REPORT is a subtype of event for attributions of quoted speech. Two parts of a clause provide important information for determining the SE type (Friedrich and Palmer, 2014b): a clause’s main verb and its main referent. The latter is loosely defined as the main entity that the segment is about; in English this is usually the subject. For example, main referents of GENERIC SENTENCEs are kinds or classes as in “Elephants are huge”, while the main referents of Eventualities and GENERALIZING SENTENCEs are particular individuals (“John is short”). 
For English, the main verb is the non-auxiliary verb ranked highest in the dependency parse (e.g. “kiss” in “John has kissed Joe”). STATEs and EVENTs differ in the fundamental lexical aspectual class (Siegel and McKeown, 2000) of their main verbs (e.g. dynamic in “She filled the glass with water” vs. stative in “Water fills the glass”). While fundamental lexical aspectual class is a word-sense level attribute of the clause’s main verb, habituality is a property of the entire clause which is helpful to determine the clause’s SE type. For example, EVENT and GENERALIZING SENTENCE differ in habituality (e.g. episodic in “John cycled to work yesterday” vs. habitual in “John cycles to work”). Like habituality, SE types are a categorization at the clause level. Properties of the clause such as modals, negation, or the perfect influence the SE type: for instance, “John might win” is treated as a STATE as it describes a possible state of the world rather than an EVENT. Such coercions happen only for clauses which, without the trigger for aspectual shift, would be EVENTs; other SEs retain their type even under coercions such as negation, e.g., “Elephants are not small” is a GENERIC SENTENCE. SE types aim to capture how clauses behave in discourse, and the types STATE and EVENT are aspectual rather than ontological categories. The types reflect not so much semantic content of a clause as its manner of presentation, and all parts of a clause contribute to determining its SE type. 1758 3 Related work SE types model aspect at the clause level; thus they are most closely related to other works performing automatic classification for various aspect-related phenomena of the verb or the clause. For example, Vendler classes (Vendler, 1957) ascribe four categories as lexical properties of verbs, distinguishing states from three types of events (accomplishment, achievement, and activity), differing according to temporal and aspectual properties (e.g. telicity and punctuality). The work of Siegel and McKeown (2000) is a major inspiration in computational work on modeling these linguistic phenomena, introducing the use of linguistic indicators (see Section 5.1). Hermes et al. (2015) model Vendler classes computationally on a verbtype level for 95 different German verbs, combining distributional vectors with supervised classification. Zarcone and Lenci (2008) investigate both supervised and unsupervised classification frameworks for occurrences of 28 Italian verbs, and Friedrich and Palmer (2014a) predict lexical aspectual class for English verbs in context. The only previous approach to automatic classification of SE types comes from Palmer et al. (2007). This system (UT07) uses word and POS tag features as well as a number of lexical features adopted from theoretical work on aspectual classification. The model is described in Section 6.1. Another related body of work has to do with determining event class as a precursor to temporal relation classification. The inventory of event classes, described in detail in the TimeML annotation guidelines (Saur´ı et al., 2006), combines semantic (REPORTING, PERCEPTION), aspectual (ASPECTUAL, STATE, OCCURRENCE), and intensional (I ACTION, I STATE) properties of events. Finally, there are close connections to systems which predict genericity of noun phrases (Reiter and Frank, 2010; Friedrich and Pinkal, 2015a), and habituality of clauses (Mathew and Katz, 2009; Friedrich and Pinkal, 2015b). 
4 Data sets The experiments presented in this paper make use of two data sets labeled with SE types. Brown data. This data set consists of 20 texts from the popular lore section of the Brown corpus (Francis and Kuˇcera, 1979), manually segmented into 4391 clauses and marked by two annotators in corpus tokens SEs Fleiss’ κ MASC 357078 30333 0.69 Wikipedia 148040 10607 0.66 Table 1: SE-labeled corpora: size and agreement. SE type MASC Wiki Fleiss κ* STATE 49.8 24.3 0.67 EVENT 24.3 18.9 0.74 REPORT 4.8 0.9 0.80 GENERIC 7.3 49.7 0.68 GENERALIZING 3.8 2.5 0.43 QUESTION 3.3 0.1 0.91 IMPERATIVE 3.2 0.2 0.94 undecided 2.4 2.1 Table 2: Distribution of SE types in gold standard (%). *Krippendorff’s diagnostics. an intuitive way with κ=0.52 (Cohen, 1960). Final labels were created via adjudication. The texts are essays and personal stories with topics ranging from maritime stories to marriage advice. MASC and Wikipedia. Our main data set consists of documents from MASC (Ide et al., 2010) and Wikipedia. The MASC data covers 12 of the written genres (see Table 11). Texts are split into clauses using SPADE (Soricut and Marcu, 2003) with some heuristic post-processing, and the clauses are labeled by three annotators independently. Annotators, all student assistants with basic linguistic training, were given an extensive annotation manual. Table 1 reports agreement over types in terms of Fleiss’ κ (Fleiss, 1971). As we do not force annotators to assign a label to each clause, we compute κ using all pairings of labels where both annotators assigned an SE type. The gold standard is constructed via majority voting. Table 2 shows the distribution of SE types. The largest proportion of segments in MASC are STATEs, while the largest proportion in Wikipedia are GENERIC SENTENCEs. The Wikipedia data was collected to supplement MASC, which contains few generics and no data from an encyclopedic genre. Within MASC, the various genres’ distributions of SE types differ as well, and agreement scores also vary: some genres contain many instances of easily classifiable SE types, while others (e.g., essays or journal) are more difficult to annotate (more details in Section 6.6). The rightmost column of Table 2 shows the values for Krippendorff’s diagnostics (Krippendorff, 1980), a tool for determining which categories hu1759 group explanation examples mv features describing the SE’s main verb & its arguments tense, lemma, lemma of object, auxiliary, WordNet sense and hypernym sense, progressive, POS, perfect, particle, voice, linguistic indicators mr features describing the main referent, i.e., the NP denoting the main verb’s subject lemma, determiner type, noun type, number, WordNet sense and supersense, dependency relations linked to this token, person, countability, bare plural cl features describing entire clause that invokes the SE presence of adverbs / prepositional clauses, conditional, modal, whether subject before verb, negated, verbs embedding the clause Table 3: Overview of feature set B. The full and detailed list is available (together with the implementation) at http://www.coli.uni-saarland.de/projects/sitent. mans had most difficulties with. For each category, one computes κ for an artificial set-up in which all categories except one are collapsed into an artificial OTHER category. A high value indicates that annotators can distinguish this SE type well from others. GENERALIZING SENTENCEs are most difficult to agree upon. 
For all frequently occurring types as well as QUESTIONs and IMPERATIVEs, agreement is substantial. Agreement on QUESTION and IMPERATIVEs is not perfect even for humans, as identifying them requires recognizing cases in reported speech, which is not a trivial task (e.g., Brunner, 2013). To illustrate another difficult case, consider the example “You must never confuse faith”, which was marked as both IMPERATIVE and GENERIC SENTENCE, by different annotators. 5 Method This section describes the feature sets and classification methods used in our approach, which models SE type labeling as a supervised sequence labeling task. 5.1 Feature sets Our feature sets are designed to work well on large data sets, across genres and domains. Features are grouped into two sets: A consists of standard NLP features including POS tags and Brown clusters. Set B targets SE labeling, focusing on syntacticsemantic properties of the main verb and main referent, as well as properties of the clause which indicate its aspectual nature. Texts are pre-processed with Stanford CoreNLP (Manning et al., 2014), including tokenization, POS tagging (Toutanova et al., 2003) and dependency parsing (Klein and Manning, 2002) using the UIMA-based DKPro framework (Ferrucci and Lally, 2004; Eckart de Castilho and Gurevych, 2014). A-pos: part-of-speech tags. These features count how often each POS tag occurs in a clause. A-bc: Brown cluster features. UT07 relies mostly on words and word/POS tag pairs. These simple features work well on the small Brown data set, but the approach quickly becomes impractical with increasing corpus size. We instead turn to distributional information in the form of Brown clusters (Brown et al., 1992), which can be learned from raw text and represent word classes in a hierarchical way. Originally developed in the context of n-gram language modeling, they aim to assign words to classes such that the average mutual information of the words in the clusters is maximized. We use existing, freely-available clusters trained on news data by Turian et al. (2010) using the implementation by Liang (2005).2 Clusterings with 320 and 1000 Brown clusters work best for our task. We use one feature per cluster, counting how often a word in the clause was assigned to this cluster (0 for most clusters). B-mv: main verb. Using dependency parses, we extract the verb ranked highest in the clause’s parse as the main verb, and extract the set of features listed in Table 3 for that token. Features based on WordNet (Fellbaum, 1998) use the most frequent sense of the lemma. Tense and voice information is extracted from sequences of POS tags using a set of rules (Loaiciga et al., 2014). Linguistic indicators (Siegel and McKeown, 2000) are features collected per verb type over a large parsed background corpus, encoding how often a verb type occurred with each linguistic marker, e.g., in past tense or with an in-PP. We use values collected from Gigaword (Graff et al., 2003); these are freely available at our project web site (Friedrich and Palmer, 2014a). B-mr: main referent. We extract the grammatical subject of the main verb (i.e., nsubj or nsubjpass) as the clause’s main referent. While the main verb must occur within the clause, the 2http://metaoptimize.com/projects/ wordreprs 1760 main referent may be a token either within or outside the clause. In the latter case, it still functions as the clause’s main referent, as in most cases it can be considered an implicit argument within the clause. 
Table 3 lists the features extracted for the main referent. B-cl: clause. These features (see also Table 3) describe properties at the clause level, capturing both grammatical phenomena such as word order and lexical phenomena including presence of particular adverbials or prepositional phrases, as well as semantic information such as modality. If the clause’s main verb is embedded in a ccomp relation, we also use features describing the respective governing verb. 5.2 Classification / sequence labeling model Our core modeling assumption is to view a document as a sequence of SE type labels, each associated with a clause; this motivates the choice of using a conditional random field (CRF, Lafferty et al. (2001)) for label prediction. The conditional probability of label sequence ⃗y given an observation sequence ⃗x is given by: P(⃗y|⃗x) = 1 Z(⃗x)exp( n X j=1 m X i=1 λifi(yj−1, yj, ⃗x, j)), with Z(⃗x) being a normalization constant (see also Klinger and Tomanek (2007)). λi, the weights of the feature functions, are learned via L-BGFS (Wright and Nocedal, 1999). We create a linear chain CRF using the CRF++ toolkit3 with default parameters, applying two forms of feature functions: fi(yj, xj) and fi(yj−1, yj). The former consists of indicator functions for combinations of SE type labels and each of the features listed above. The latter type of feature function (also called “bigram” features in CRF++ terminology) gets instantiated as indicator functions for each combination of labels, thereby enabling the model to take sequence information into account. When using only the former type of feature function, our classifier is equivalent to a maximum entropy (MaxEnt) model. Side remark: pipeline approach. Feature set B is inspired by previous work on two subtasks of assigning an SE type to a clause (see Section 3): (a) identifying the genericity of a noun phrase in its clausal context, and (b) identifying whether a clause is episodic, habitual or static. This informa3https://code.google.com/p/crfpp tion can in turn be used to determine a clause’s SE type label in a rule-based way, e.g., GENERALIZING SENTENCEs are habitual clauses with a nongeneric main referent. As our corpus is also annotated with this information, we also trained separate models for these subtasks and assigned the SE type label accordingly. However, such a pipeline approach is not competitive with the model trained directly on SE types (see Section 6.3). 6 Experiments Here we present our experiments on SE type classification, beginning with a (near) replication of the UT07 system, and moving on to evaluate our new approach from multiple perspectives. 6.1 Replication and extension of UT07 As a first step, we implement a system similar to UT07, which relies on the features summarized in Table 4. For W and T features, we set a frequency threshold of 7 occurrences. Feature set L comprises sets of predicates assumed to correlate with particular SE types, and whether or not the clause contains a modal or finite verb. Set G includes all verbs of the clause and their POS-tags. UT07 additionally uses CCG supertags and grammatical function information. The UT07 system approximates a sequence labeling model by adding the predicted labels of previous clauses as lookback (LB) features. To parallel their experiments, we train both MaxEnt and CRF models, as explained in Section 5.2. Results on the Brown data, with the same training/test split, appear in Table 5. 
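The lookback (LB) setting approximates sequence labeling by feeding the MaxEnt classifier its own predictions for the preceding clauses as extra features and decoding greedily from left to right; a minimal sketch (the classifier interface and feature names are illustrative, not the UT07 code):

```python
def predict_with_lookback(clf, clause_features, k=2):
    # clf is any per-clause classifier (e.g., a MaxEnt model) with a
    # hypothetical single-instance predict(features) -> label interface.
    history, predictions = [], []
    for feats in clause_features:
        feats = dict(feats)
        for offset, prev in enumerate(reversed(history[-k:]), start=1):
            feats[f"prev_label_{offset}"] = prev   # lookback features
        label = clf.predict(feats)                 # greedy, left-to-right
        predictions.append(label)
        history.append(label)
    return predictions
```

In contrast, the CRF scores entire label sequences at once via its label-bigram feature functions, which is what the next comparison exploits.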
Unlike the LB model, our CRF predicts the label sequence jointly and outperforms UT07 on the Brown data by up to 7% accuracy. We assume that the performance boost in the MaxEnt setting is at least partially due to having better parses. In sum, on the small Brown data set, a CRF approach successfully leverages sequence information, and a simple set of features works well. Preliminary experiments applying our new features on Brown data yield no improvements, suggesting that word-based features overfit this domain. W words T POS tags, word/POS tag combinations L linguistic cues G grammatical cues Table 4: Features used in baseline UT07. 1761 Palmer et al. (2007) our implementation features MaxEnt LB MaxEnt CRF W 45.4 46.6 48.8 47.0 WT 49.9 52.4 52.9 53.7 WTL 48.9 50.5 51.6 55.8 WTLG 50.6 53.1 55.8 60.0 Table 5: Accuracy on Brown. Test set majority class (STATE) is 35.3%. LB = results for best lookback settings in MaxEnt. 787 test instances. 6.2 Experimental settings We develop our models using 10-fold cross validation (CV) on 80% (counted in terms of the number of SEs) of the MASC and Wikipedia data (a total of 32,855 annotated SEs), keeping the remaining 20% as a held-out test set. Development and test sets each contain distinct sets of documents; the documents of each MASC genre and of Wikipedia are distributed over the folds. We report results in terms of macro-average precision, macro-average recall, macro-average F1-measure (harmonic mean of macro-average precision and macro-average recall), and accuracy. We apply McNemar’s test (McNemar, 1947) with p < 0.01 to test significance of differences in accuracy. In the following tables, we mark numerically-close scores with the same symbols if they are found to be significantly different. Upper bound: human performance. Labeling clauses with their SE types is a non-trivial task even for humans, as there are many borderline cases (see Sections 4 and 8). We compute an upper bound for system performance by iterating over all clauses: for each pair of human annotators, two entries are added to a co-occurrence matrix (similar to a confusion matrix), with each label serving once as “gold standard” and once as the “prediction.” From this matrix, we can compute scores in the same manner as for system predictions. Precision and recall scores are symmetric in this case, and accuracy corresponds to observed agreement. 6.3 Impact of feature sets We now compare various configurations of our CRF-based SE type labeler, experimenting with the feature sets as described in Section 5.1. Table 6 shows the results for 10-fold CV on the dev portion of the MASC+Wiki corpus. Each feature set on its own outperforms the majority class baseline. Of the individual feature groups, bc and mv have the highest predicfeature set P R F acc. maj. class (STATE) 6.4 14.3 8.8 45.0 A 70.1 61.4 65.4 ∗†72.1 pos 49.3 40.3 44.3 58.7 bc 67.5 55.8 61.1 ∗70.6 B 69.5 62.7 66.9 ⋆‡72.8 mr 36.4 26.8 30.9 51.7 mv 62.3 52.4 56.9 ⋆70.8 cl 53.3 41.2 46.6 52.8 A+B 74.1 68.6 71.2 ‡†76.4 upper bound (humans) 78.6 78.6 78.6 79.6 Table 6: Impact of different feature sets. Wiki+MASC dev set, CRF, 10-fold CV. feature set P R F acc. maj. class (STATE) 6.4 14.3 8.8 44.7 A 67.6 60.6 63.9 ∗69.8 B 69.9 61.7 65.5 †71.4 A+B 73.4 65.5 69.3 ∗†74.7 Table 7: Results on MASC+Wiki held-out test set (7937 test instances). tive power; both capture lexical information of the main verb. Using sets A and B individually results in similar scores; their combination increases accuracy on the dev set by an absolute 3.6-4.3%. 
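All scores in Tables 6 and 7 follow the definitions in Section 6.2; as a concrete illustration, the macro-averaged scores and the 2x2 table fed to McNemar's test can be computed roughly as follows (a sketch, not the exact evaluation scripts):

```python
def macro_scores(gold, pred, labels):
    # Macro-averaged precision/recall; F1 is the harmonic mean of the two
    # macro averages (not the mean of per-class F1 scores).
    precisions, recalls = [], []
    for c in labels:
        tp = sum(g == p == c for g, p in zip(gold, pred))
        n_pred = sum(p == c for p in pred)
        n_gold = sum(g == c for g in gold)
        precisions.append(tp / n_pred if n_pred else 0.0)
        recalls.append(tp / n_gold if n_gold else 0.0)
    P = sum(precisions) / len(labels)
    R = sum(recalls) / len(labels)
    F1 = 2 * P * R / (P + R) if P + R else 0.0
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return P, R, F1, acc

def mcnemar_table(gold, pred_a, pred_b):
    # 2x2 contingency table over items: rows = system A correct/incorrect,
    # columns = system B correct/incorrect; pass to a McNemar test
    # implementation (e.g., statsmodels) to compare the two systems.
    t = [[0, 0], [0, 0]]
    for g, a, b in zip(gold, pred_a, pred_b):
        t[int(a != g)][int(b != g)] += 1
    return t
```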
Within A and B, each subgroup contributes to the increase in performance (not shown in table). Finally, having developed exclusively on the dev set, we run the system on the held-out test set, training on the entire dev set. Results (in Table 7) show the same tendencies as for the dev set: each feature set contributes to the final score, and the syntactic-semantic features targeted at classifying SE types (i.e. B) are helpful. Ablation. To gain further insight, we ablate each feature subgroup from the full system, see Table 8. Again, bc features and mv features are identified as the most important ones. The other feature groups partially carry redundant information when combining A and B. Next, we rank features by their information gain with respect to the SE types. In Table 3, the features of each group are ordered by this analysis. Ablating single features from the full system does not result in significant performance losses. However, using only selected, top features for our system decreased performance, possibly because some features cover rare but important cases, and because the feature selection algorithm does not take into account the information features may provide regarding tran1762 feature set P R F acc. A+B 74.1 68.6 71.2 76.4 - bc 71.3 65.7 68.4 74.5 - pos 73.4 67.4 70.2 76.0 - mr 73.7 67.4 70.4 75.9 - mv 72.3 64.2 68.0 73.6 - cl 73.1 67.6 70.2 76.0 Table 8: Impact of feature groups: ablation Wiki+MASC dev set, CRF, 10-fold CV. All accuracies for ablation settings are significantly different from A+B. sitions (Klinger and Friedrich, 2009). In addition, CRFs are known to be able to deal with a large number of potentially dependent features. Side remark: pipeline approach. We here use the subset of SEs labeled as EVENT, STATE, GENERIC SENTENCE or GENERALIZING SENTENCE because noun phrase genericity and habituality is not labeled for IMPERATIVE and QUESTION, and REPORT is identified lexically based on the main verb rather than these semantic features. Models for subtasks of SE type identification, i.e., (a) genericity of noun phrases and (b) habituality reach accuracies of (a) 86.8% and (b) 83.6% (on the relevant subset). Applying the labels output by these two systems as (the only) features in a rule-based way using a J48 decision tree (Witten et al., 1999) results in an accuracy of 75.5%, which is lower than 77.2%, the accuracy of the CRF which directly models SE types (when using only the above four types). 6.4 Impact of amount of training data Next we test how much training data is required to get stable results for SE type classification. Figure 3 shows accuracy and F1 for 10-fold CV using A+B, with training data downsampled to different extents in each run by randomly removing documents. Up to the setting which uses about 60% of the training data, performance increases steadily. Afterwards, the curves start to level off. We conclude that robust models can be learned from our corpus. Adding training data, especially in-domain data, will, as always, be beneficial. 6.5 Impact of sequence labeling approach Palmer et al. (2007) suggest that SE types of nearby clauses are a useful source of information. We further test this hypothesis by comparing our sequence labeling model (CRF) to two additional 0 20 40 60 80 100 40 50 60 70 80 G G G G G G G G G G G G G G G G 0 20 40 60 80 100 40 50 60 70 80 G Accuracy F1 % training data Figure 3: Learning curve for MASC+Wiki dev. 
models: (1) a MaxEnt model, which labels clauses in isolation, and (2) a MaxEnt model including the correct label of the preceding clause (seq-oracle), simulating an upper bound for the impact of sequence information on our system. Table 9 shows the results. Scores for GENERALIZING SENTENCE are the lowest as this class is very infrequent in the data set. The most striking improvement of the two sequence labeling settings over MaxEnt concerns the identification of GENERIC SENTENCEs. These often “cluster” in texts (Friedrich and Pinkal, 2015b) and hence their identification profits from using sequence information. The results for seq-oracle show that the sequence information is useful for STATE, GENERIC and GENERALIZING SENTENCEs, but that no further improvement is to be expected from this method for the other SE types. We conclude that the CRF model is to be preferred over the MaxEnt model; in almost all of our experiments it performs significantly better or equally well. SE type MaxEnt CRF seq-oracle STATE 79.1 80.6 81.7 EVENT 77.5 78.6 78.3 REPORT 78.2 78.9 78.3 GENERIC 61.3 68.3 73.5 GENERALIZING 25.0 29.4 38.1 IMPERATIVE 72.3 75.3 74.7 QUESTION 84.4 84.4 83.8 macro-avg P 71.5 74.1 75.5 macro-avg R 66.1 68.6 70.4 macro-avg F1 68.7 71.2 73.9 accuracy ∗74.1 ∗†76.4 †77.9 Table 9: Impact of sequence information: (F1 by SE type): CRF, Masc+Wiki, 10-fold CV. 6.6 Impact of genre In this section, we test to what extent our models are robust across genres. Table 10 shows F1scores for each SE type for two settings: the 101763 SE type genre-CV 10-fold CV humans STATE 78.2 80.6 82.8 EVENT 77.0 78.6 80.5 REPORT 76.8 78.9 81.5 GENERIC 44.8 68.3 75.1 GENERALIZING 27.4 29.4 45.8 IMPERATIVE 70.8 75.3 93.6 QUESTION 81.8 84.4 90.7 macro-avg F1 66.6 71.2 78.6 accuracy ∗71.8 ∗76.4 79.6 Table 10: Impact of in-genre training data. F1score by SE type, CRF, MASC+Wiki dev. fold CV setting as explained in section 6.2, and a genre-CV setting, simulating the case where no in-genre training data is available, treating each genre as one cross validation fold. As expected, in the latter setting, both overall accuracy and macroaverage F1 are lower compared to the case when in-genre training data is available. Nevertheless, our model is able to capture the nature of SE types across genres: the prediction of STATE, EVENT, REPORT and QUESTION is relatively stable even in the case of not having in-genre training data. An EVENT seems to be easily identifiable regardless of the genre. GENERIC SENTENCE is a problematic case; in the genre-CV setting, its F1-score drops by 23.5%. The main reason for this is that the distribution of SE types in Wikipedia differs completely from the other genres (see section 4). Precision for GENERIC SENTENCE is at 70.5% in the genre-CV setting, but recall is only 32.8% (compared to 70.1% and 66.6% in the 10-fold CV setting). Genericity seems to work differently in the various genres: most generics in Wikipedia clearly refer to kinds (e.g., lions or plants), while many generics in essays or letters are instances of more abstract concepts or generic you. Results by genre. Next, we drill down in the evaluation of our system by separately inspecting results for individual genres. Table 11 shows that performance differs greatly depending on the genre. In some genres, the nature of SE types seems clearer to our annotators than in others, and this is reflected in the system’s performance. The majority class is GENERIC SENTENCE in wiki, and STATE in all other genres. 
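The genre-CV setting treats each genre as one cross-validation fold; a minimal sketch, assuming each document record exposes its genre (names are illustrative):

```python
from collections import defaultdict

def genre_folds(documents):
    # Leave-one-genre-out: train on all other genres, test on the held-out
    # genre, i.e., no in-genre training data is available.
    by_genre = defaultdict(list)
    for doc in documents:
        by_genre[doc.genre].append(doc)      # doc.genre is an assumed attribute
    for genre, test_docs in by_genre.items():
        train_docs = [d for g, docs in by_genre.items() if g != genre
                      for d in docs]
        yield genre, train_docs, test_docs
```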
In the ‘same genre’ setting, 10-fold CV was performed within each genre. Adding out-of-genre training data improves macro-average F1 especially for genres with low scores in the ‘same genre’ setting. This boost is training data maj. same class genre all humans genre % F1 F1 F1 κ blog 57.6 57.3 64.9 72.9 0.62 email 68.6 63.6 66.4 67.0 0.65 essays 49.4 33.5 62.1 64.6 0.54 ficlets 44.7 60.2 65.7 81.7 0.80 fiction 45.8 63.0 66.0 76.7 0.77 govt-docs 60.9 26.6 67.6 72.6 0.57 jokes 34.9 66.2 69.8 82.0 0.77 journal 59.3 35.8 59.8 63.7 0.52 letters 57.3 51.9 65.1 68.0 0.66 news 52.2 54.6 64.1 78.6 0.75 technical 57.7 31.4 59.4 54.7 0.55 travel 25.9 39.9 58.1 48.9 0.59 wiki 51.6 53.1 63.0 69.2 0.66 Table 11: Macro-avg. F1 by genre, CRF, 10-fold CV. Majority class given in % of clauses. due to adding training data for types that are infrequent in that genre. Accuracy (not shown in table) improves significantly for blog, essays, govt-docs, jokes, and journal, and does not change for the remaining genres. We conclude that it is extremely beneficial to use our full corpus for training, as robustness of the system is increased, especially for SE types occurring infrequently in some genres. 7 Identification of Abstract Entities Our system notably does not address one of Smith’s main SE categories: Abstract Entities, introduced in Section 2. These SEs are expressed as clausal arguments of certain predicates such as (canonically) know or believe. Note that the Abstract Entity subtypes FACT and PROPOSITION do not imply that a clause’s propositional content is true or likely from an objective point of view, they rather indicate that the clause’s content is introduced to the discourse as an object of knowledge or belief, respectively. Following Smith (2003), we use PROPOSITION in a different sense than the usual meaning of “proposition” in semantics naturally situation entities of any type may have propositional content. Smith’s use of the term (and thus ours too) contrasts PROPOSITION with FACT - our PROPOSITIONs are simply sentences presented as a belief (or with uncertain evidential status) of the writer or speaker. This use of “proposition” also occurs in linguistic work by Peterson (1997) on factive vs. propositional predicates. During the creation of the corpus, annotators 1764 were asked to give one label out of the SE types included in our classification task, and to mark the clause with one of the Abstract Entity subtypes in addition if applicable. Analysis of the data shows that our annotators frequently forgot to mark clauses as Abstract Entities, which makes it difficult to model these categories correctly. As a first step toward resolving this issue, we implement a filter which automatically identifies Abstract Entities by looking for clausal complements of certain predicates. The list of predicates is compiled using WordNet synonyms of know, think, and believe, as well as predicates extracted from FactBank (Sauri and Pustejovsky, 2009) and TruthTeller (Lotan et al., 2013). Many of the clauses automatically identified as Abstract Entities are cases that annotators missed during annotation. We thus performed a post-hoc evaluation, presenting these clauses in context to annotators and asking whether the clause is an Abstract Entity. The so-estimated precision of our filter is 85.8% (averaged over 3 annotators). Agreement for this annotation task is κ = 0.54, with an observed agreement of 88.7%. 
Our filter finds 80% of the clauses labeled as Abstract Entity by at least one annotator in the gold standard; this is approximately its recall. 8 Conclusion We have presented a system for automatically labeling clauses with their SE type which is mostly robust to changes in genre and which reaches accuracies of up to 76%, comparing favorably to the human upper bound of 80%. The system benefits from capturing contexual effects by using a linear chain CRF with label bigram features. In addition, the distributional and targeted syntactic-semantic features we introduce enable SE type prediction for large and diverse data sets. Our publicly available system can readily be applied to any written English text, making it easy to explore the utility of SE types for other NLP tasks. Discussion. Our annotation scheme and guidelines for SE types (Friedrich and Palmer, 2014b) follow established traditions in linguistics and semantic theory. When applying these to a large number of natural texts, though, we came across a number of borderline cases where it is not easy to select just one SE type label. As we have reported before (Friedrich et al., 2015), the most difficult case is the identification of GENERIC SENTENCEs, which are defined as making a statement about a kind or class. We find that making this task becomes particularly difficult for argumentative essays (Becker et al., to appear). Future work. A next major step in our research agenda is to integrate SE type information into various applications, including argument mining, temporal reasoning, and summarization. Together with the mode of progression through the text, e.g., temporal or spatial, SE type distribution is a key factor for a reader or listener’s intuitive recognition of the discourse mode of a text passage. Therefore the automatic labeling of clauses with their SE type is a prerequisite for automatically identifying a passage’s discourse mode, which in turn has promising applications in many areas of NLP, as the mode of a text passage has implications for the linguistic phenomena to be found in the passage. Examples include temporal processing of text (Smith, 2008), summarization, or machine translation (for work on genres see van der Wees et al., 2015). Here we focus on the automatic identification of SE types, leaving the identification of discourse modes to future work. The present work, using the SE type inventory introduced by Smith (2003), is also the basis for research on more fine-grained aspectual type inventories. Among others, we plan to create subtypes of the STATE label, which currently subsumes clauses stativized by negation, modals, lexical information or other aspectual operators. Distinguishing these is relevant for temporal relation processing or veridicality recognition. Acknowledgments We thank the anonymous reviewers and Andrea Horbach for their helpful comments related to this work, and our annotators Melissa Peate Sørensen, Christine Bocionek, Kleo-Isidora Mavridou, Fernando Ardente, Damyana Gateva, Ruth K¨uhn and Ambika Kirkland. This research was supported in part by the MMCI Cluster of Excellence of the DFG, and the first author is supported by an IBM PhD Fellowship. The second author is funded by the Leibniz ScienceCampus Empirical Linguistics and Computational Language Modeling, supported by Leibniz Association grant no. SAS2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-W¨urttemberg. 1765 References Emmon Bach. 1986. The algebra of events. Linguistics and philosophy 9(1):5–16. 
Maria Becker, Alexis Palmer, and Anette Frank. to appear. Argumentative texts and clause types. In Proceedings of the 3rd Workshop on Argument Mining, ACL 2016. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics 18(4):467–479. Annelen Brunner. 2013. Automatic recognition of speech, thought, and writing representation in german narrative texts. Literary and linguistic computing 28(4):563–575. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1):37–46. Francisco Costa and Ant´onio Branco. 2012. Aspectual type and temporal relation classification. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Richard Eckart de Castilho and Iryna Gurevych. 2014. A broad-coverage collection of portable NLP components for building shareable analysis pipelines. In Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT. Dublin, Ireland. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. David Ferrucci and Adam Lally. 2004. UIMA: an architectural approach to unstructured information processing in the corporate research environment. Natural Language Engineering 10(34):327–348. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin 76(5):378. W. Nelson Francis and Henry Kuˇcera. 1979. A Standard Corpus of Present-Day Edited American English, for use with Digital Computers. Brown University. Annemarie Friedrich and Alexis Palmer. 2014a. Automatic prediction of aspectual class of verbs in context. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL). Baltimore, USA. Annemarie Friedrich and Alexis Palmer. 2014b. Situation entity annotation. In Proceedings of the Linguistic Annotation Workshop VIII . Annemarie Friedrich, Alexis Palmer, Melissa Peate Srensen, and Manfred Pinkal. 2015. Annotating genericity: a survey, a scheme, and a corpus. In Proceedings of the 9th Linguistic Annotation Workshop (LAW IX). Denver, Colorado, US. Annemarie Friedrich and Manfred Pinkal. 2015a. Automatic recognition of habituals: a three-way classification of clausal aspect. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal. Annemarie Friedrich and Manfred Pinkal. 2015b. Discourse-sensitive Automatic Identification of Generic Expressions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL). Beijing, China. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia . J¨urgen Hermes, Michael Richter, and Claes Neuefind. 2015. Automatic induction of German aspectual verb classes in a distributional framework. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (GSCL). Nancy Ide, Collin Baker, Christiane Fellbaum, and Charles Fillmore. 2008. MASC: The manually annotated sub-corpus of American English . Nancy Ide, Christiane Fellbaum, Collin Baker, and Rebecca Passonneau. 2010. The manually annotated sub-corpus: A community resource for and by the people. In Proceedings of the ACL 2010 conference short papers. Dan Klein and Christopher D Manning. 2002. Fast exact inference with a factored model for natural language parsing. 
In Advances in Neural Information Processing Systems. Roman Klinger and Christoph M Friedrich. 2009. Feature subset selection in conditional random fields for named entity recognition. In Proceedings of Recent Advances in Natural Language Processing (RANLP). 1766 Roman Klinger and Katrin Tomanek. 2007. Classical probabilistic models and conditional random fields. TU Dortmund Algorithm Engineering Report . Manfred Krifka, Francis Jeffrey Pelletier, Gregory N. Carlson, Alice ter Meulen, Godehard Link, and Gennaro Chierchia. 1995. Genericity: An Introduction. The Generic Book pages 1–124. Klaus Krippendorff. 1980. Content analysis: An introduction to its methodology. Sage. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML). Percy Liang. 2005. Semi-supervised learning for natural language. Ph.D. thesis, University of California Berkeley. Sharid Loaiciga, Thomas Meyer, and Andrei Popescu-Belis. 2014. English-French Verb Phrase Alignment in Europarl for Tense Translation Modeling. In The Ninth Language Resources and Evaluation Conference (LREC). Amnon Lotan, Asher Stern, and Ido Dagan. 2013. TruthTeller: Annotating Predicate Truth. In Proceedings of NAACL 2013. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. Thomas A. Mathew and Graham E. Katz. 2009. Supervised categorization for habitual versus episodic sentences. In Sixth Midwest Computational Lingustics Colloquium. Indiana University, Bloomington, Indiana. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 12(2):153– 157. Alexis Palmer, Elias Ponvert, Jason Baldridge, and Carlota Smith. 2007. A sequencing model for situation entity classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Philip L Peterson. 1997. On representing event reference. In Fact Proposition Event, Springer, pages 65–90. Nils Reiter and Anette Frank. 2010. Identifying Generic Noun Phrases. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden. Roser Saur´ı, Jessica Littman, Bob Knippen, Rober Gaizauskas, Andrea Setzer, and James Pustejovsky. 2006. TimeML Annotation Guidelines, Version 1.2.1. Technical report. Roser Sauri and James Pustejovsky. 2009. Factbank 1.0 ldc2009t23. Web Download. Philadelphia: Linguistic Data Consortium. Eric V Siegel and Kathleen R McKeown. 2000. Learning methods to combine linguistic indicators: Improving aspectual classification and revealing linguistic insights. Computational Linguistics 26(4):595–628. Carlota S Smith. 2003. Modes of discourse: The local structure of texts, volume 103. Cambridge University Press. Carlota S Smith. 2005. Aspectual entities and tense in discourse. In Aspectual inquiries, Springer, pages 223–237. Carlota S Smith. 2008. Time with and without tense. In Time and modality, Springer, pages 227–249. Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. 
Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Featurerich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. What’s 1767 in a Domain? Analyzing Genre and Topic differences in Statistical Machine Translation. In Proceedings of the 53rd Meeting of the Association for Computational Linguistics (ACL). Benjamin D Van Durme. 2010. Extracting implicit knowledge from text. Ph.D. thesis, University of Rochester. Zeno Vendler. 1957. Verbs and times. The philosophical review pages 143–160. Ian H Witten, Eibe Frank, Len Trigg, Mark Hall, Geoffrey Holmes, and Sally Jo Cunningham. 1999. Weka: Practical machine learning tools and techniques with java implementations . Stephen Wright and Jorge Nocedal. 1999. Numerical optimization. Springer Science 35:67–68. Alessandra Zarcone and Alessandro Lenci. 2008. Computational models of event type classification in context. In Proceedings of LREC2008. 1768
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1769–1779, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Prototypical Event Structure from Photo Albums Antoine Bosselut†, Jianfu Chen‡, David Warren‡, Hannaneh Hajishirzi† and Yejin Choi† †Computer Science & Engineering, University of Washington, Seattle, WA {antoineb, hannaneh, yejin}@cs.washington.edu ‡Department of Computer Science, Stony Brook University, Stony Brook, NY {jianchen, warren}@cs.stonybrook.edu Abstract Activities and events in our lives are structural, be it a vacation, a camping trip, or a wedding. While individual details vary, there are characteristic patterns that are specific to each of these scenarios. For example, a wedding typically consists of a sequence of events such as walking down the aisle, exchanging vows, and dancing. In this paper, we present a data-driven approach to learning event knowledge from a large collection of photo albums. We formulate the task as constrained optimization to induce the prototypical temporal structure of an event, integrating both visual and textual cues. Comprehensive evaluation demonstrates that it is possible to learn multimodal knowledge of event structure from noisy web content. 1 Introduction Many common scenarios in our lives, such as a wedding or a camping trip, show characteristic structural patterns. As illustrated in Figure 1, these patterns can be sequential, such as in a wedding, where exchanging vows generally happens before cutting the cake. In other scenarios, there may be a set of composing events, but no prominent temporal relations. A camping trip, for example, might include events such as hiking, which can happen either before or after setting up a tent. This observation on the prototypical patterns in everyday scenarios goes back to early artificial intelligence research. Scripts (Schank and Abelson, 1975), an early formalism, were developed to encode the necessary background knowledge to support an inference engine for common sense reasoning in limited domains. However, early ap-Ring 'me. -Exchanging our rings. -Rings and promises. Kiss -Our first ever kiss. -You may kiss the bride. -Sealed with a kiss. Cut the cake -Cake cuBng. -The cake was so solid. -Dancing excitement. -First dance. -Ballroom dancing. reading vows presen'ng rings best cake ever Photo albums down the aisle Prototypical Cap2ons: Exchange rings Dance Learned Events: -Reading our vows. -Our vows. Vows Temporal Knowledge: Dance … … Kiss Cake Vows Rings Figure 1: We collect photo albums of common scenarios (e.g., weddings) and cluster their images and captions to learn the hierarchical events that make up these scenarios. We use constrained optimization to decode the temporal order of these events, and we extract the prototypical descriptions that define them. proaches based on hand-coded symbolic representations proved to be brittle and difficult to scale. An alternative direction in recent years has been statistical knowledge induction, i.e., learning script or common sense knowledge bottom-up from large-scale data. While most prior work is based on text (Pichotta and Mooney, 2014; Jans et al., 2012; Chambers and Jurafsky, 2008; Chambers, 2013), recent work begins exploring the use of images as well (Bagherinezhad et al., 2016; Vedantam et al., 2015). 
In this paper, we present the first study for learning knowledge about common life scenarios (e.g., weddings, camping trips) from a large collection of online photo albums with time-stamped images and their captions. The resulting dataset includes 34,818 time-stamped photo albums corresponding to 12 distinct event scenarios with 1.5 million images and captions (see Table 1 for more details). We cast unsupervised learning of event structure as a sequential multimodal clustering prob1769 lem, which requires solving two subproblems concurrently: identifying the boundaries of events and assigning identities to each of these events. We formulate this process as constrained optimization, where constraints encode the temporal event patterns that are induced directly from the data. The outcome is a statistically induced prototypical structure of events characterized by their visual and textual representations. We evaluate the quality and utility of the learned knowledge in three tasks: temporal event ordering, segmentation prediction, and multimodal summarization. Our experimental results show the performance of our model in predicting the order of photos in albums, partitioning photo albums into event sequences, and summarizing albums. 2 Overview The high-level goal of this work is unsupervised induction of the prototypical event structure of common scenarios from multimodal data. We assume a two-level structure: high-level events, which we refer to as scenarios (e.g., wedding, funeral), are given, and low-level events (e.g., dance, kiss, vows), which we refer to as events, are to be automatically induced. In this section, we provide the overview of the paper (Section 2.1), and introduce our new dataset (Section 2.2). 2.1 Approach Given a large collection of photo albums corresponding to a scenario, we want to learn three aspects of event knowledge by (1) identifying events common to the given scenario (Section 4.1), (2) learning temporal relations across events (Section 4.2), and (3) extracting prototypical captions for each event (Section 4.3). To induce the prototypical event structure, an important subproblem we consider is individual photo album analysis, where the task is (1) partitioning each photo album into a sequence of segments, and (2) assigning the event identity to each segment. We present an inference model based on Integer Linear Programming (ILP) in Section 3 to perform both segmentation and event identification simultaneously, in consideration of the learned knowledge that we describe in Section 4. Finally, we evaluate the utility of the automatically induced knowledge in the context of three concrete tasks: temporal ordering of photos (Section 6.1), album segmentation (Section 6.2), and scenario # of albums # of images WEDDING 4689 192K MARATHON 3961 158K COOKING 1168 36K FUNERAL 781 28K BARBECUE 735 22K BABY BIRTH 688 21K PARIS TRIP 4603 306K NEW YORK TRIP 4205 267K CAMPING 4063 159K THANKSGIVING 5928 153K CHRISTMAS 3449 98K INDEPENDENCE DAY 548 22K TOTAL 34,818 1.5M Table 1: Dataset Statistics: the number of albums and images compiled for each scenario. The middle horizontal line separates the scenarios we predict have a well-defined temporal structure (top) from those we predict do not (bottom). photo album summarization (Section 6.3). 2.2 Dataset For this study, we have compiled a new corpus of multimodal photo albums across 12 distinct scenarios. It comprises of 34,818 albums containing 1.5 million pairs of online photographs and their textual descriptions. 
Table 1 shows the list of scenarios and the corresponding data statistics. We choose six scenarios (the top half of Table 1) that we expect have an inherent temporal event structure that can be learned and six that we expect do not (the bottom half of Table 1). The dataset is collected using the Flickr API1,2. We use the scenario names and variations of them (e.g., Paris Trip, Paris Vacation) as queries for images. We then form albums from these images by grouping images by user, sorting them by timestamp, and extracting groups that are within a contained time frame (e.g., 24 hours for a wedding, 5 days for a trip). For all images, we extract the first sentences of the corresponding textual descriptions as captions and also store their timestamps. This data is publicly available at https://www.cs.washington.edu/ projects/nlp/protoevents. 3 Inference Model for Multimodal Event Segmentation and Identification Given a photo album, the goal of the inference is to assign events to photos and to segment albums by event. More formally, given a sequence of M photos P = {p1, . . . , pM}, and N learned events E = {e1, . . . , eN}, the task is to assign each photo to a 1https://www.flickr.com/services/api/ 2https://pypi.python.org/pypi/flickrapi/1.4.5 1770 PL(kiss          dance)  =  .04   → vows   kiss   toasts   cut  cake   dancing   aisle   ready   dancing   ready   toasts   aisle   vows   kiss   Event  affinity  scores  Ac  Av   bc  :          .31                              .09                            .82   PL(dance          toast)  =  .17   PG(vows          toast)  =  .84   Learned    Events:   Photo  Album:   Event  Assignments  and  Segments:   bv  :          .69                              .32                            .91          …     ⇒ PG(vows            dance)  =  .79   ⇒ → φEvent φSeg φtemporal Figure 2: The events learned in Section 4 are assigned to photos based on textual (Ac) and visual (Av) affinities, which encode how well a photo represents an event (φevent). Segmentation scores (φseg) between adjacent photos encourage similar photos to be assigned the same event. Local transition, PL, and global pairwise ordering, PG, probabilities encode the learned temporal knowledge between events. φtemporal encourages event assignments toward a learned temporal structure of the scenario. single event. The event assignment can be viewed as a latent variable for each photo. We formulate a constrained optimization (depicted in Figure 2) that maximizes the objective function, F, which consists of three scoring components: (a) event assignment scores φevent (Section 3.1), (b) segmentation scores φseg (Section 3.2), and (c) temporal knowledge scores φtemporal (Section 3.3): F = φevent + φseg + φtemporal (1) Decision Variables. The binary decision variable Xi,k indicates that photo pi is assigned to event ek. The binary decision variable Zi,j,k,l indicates that photos pi and pj are assigned to events ek and el, respectively: Zi,j,k,l := Xi,k ∧Xj,l (2) 3.1 Event Assignment Scores Event assignment scores quantify the textual and visual affinity between a photo pi and an event ek. Affinities are measures of representation similarity between photos and events. These scores push photos displaying a certain event to be assigned to that event. For now we assume the textual affinity matrix Ac ∈[0, 1]M×N and the visual affinity matrix Av ∈[0, 1]M×N are given. We describe how we obtain these affinity matrices in Section 4.1. 
Event assignment scores are defined as the weighted sum of both textual and visual affinity: φevent = M X i=1 N X k=1  γceAc i,k + γveAv i,k  Xi,k (3) where Xi,k is a photo-event assignment decision variable, and γce and γve are hyperparameters that balance the contribution of both affinities. 3.2 Segmentation Scores Segmentation scores quantify textual and visual similarities between adjacent photos. These scores encourage similar adjacent photos to be assigned to the same event. We define a similarity score between adjacent photos equal to the weighted sum of their textual (bc) and visual (bv) similarities: φseg = M−1 X i=1 N X k=1  γcsbc i + γvsbv i  Zi,i+1,k,k (4) where bc, bv ∈[0, 1](M−1)×1 are vectors of textual and visual similarity scores between adjacent photos whose ith element corresponds to the similarity score between photos pi and pi+1, Z is a decision variable defined by Equation 2, and γcs and γvs are hyperparameters balancing the contribution of both types of similarity. The similarity scores in the b vectors are computed using cosine similarity of the feature representations of adjacent images in both the textual and visual modes. 3.3 Temporal Knowledge Scores Temporal knowledge scores quantify the compatibilities across different event assignments in terms of their relative ordering. For now, we assume two types of temporal knowledge matrices are given: L ∈[0, 1]N×N which stores local transition probabilities for every pair of events, ek and el, and G ∈[0, 1]N×N which stores global pairwise ordering probabilities for every pair of events, ek 1771 and el. We describe how we obtain these temporal knowledge matrices in Section 4.2. The temporal knowledge score, defined below, encourages the inference model to assign events that are compatible with the learned temporal knowledge: φtemporal = γlp M X i=0 N X k,l=1 Lk,lZi,i+1,k,l (5) + γgp M X i=1 M X j=i N X k,l=1 Gk,lZi,j,k,l where Z is a decision variable defined by Equation 2, and γlp and γgp are hyperparameters that balance the contribution of local and global temporal knowledge in the objective. 3.4 Constraints We include hard constraints that force each photo to be assigned to exactly one event: N X k=1 Xi,k = 1 (6) The number of these constraints is linear in the number of photos in an album. We also include hard constraints to ensure consistencies among binary decision variables X and Z: 1 2(Xi,k + Xj,l) −Zi,j,k,l ≥0 (7) which states that Zi,j,k,l can be 1 only if both Xi,k and Xj,l are 1. The number of constraints for segmentation scores and local transition probabilities is O(MN 2) because they model interactions between adjacent photos for all event pairs. The number of these constraints for global pairwise ordering probabilities is O(M2N2) because they model interactions between all pairs of photos in an album for all event pairs. 4 Learned Event Knowledge We learn base events for each scenario by clustering photos from training albums related to that scenario (Figure 3). As described in Section 3, these base events and their temporal knowledge are incorporated in a joint model for event induction in unseen albums. 4.1 Learned Event Representation We perform k-means clustering over captions to create a base event model. We perform text-only clustering at first since visual cues are significantly -Arc de Triomphe -At the Arc. Louvre Arc de Triomphe -Louvre Square. -Pyramid of the Louvre. Invalides -The Hotel des Invalides. -Cannons at les Invalides. Notre Dame -Notre Dame Cathedral. -Front of Notre Dame . 
Eiffel -I spy the Eiffel Tower. -Under the Eiffel. Local transition probabilities Global pairwise ordering probabilities PG(e5 e1) = .47 eiffel arc de triomphe PG(e2 e4) = .48 louvre notre dame PG(e3 e2) = .68 invalides louvre PL(e1 e5) = .24 arc de triomphe eiffel PL(e4 e3) = .01 notre dame invalides PL(e2 e5) = .12 louvre eiffel ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ → → → → → → Event Visual Center, eV Event Textual Center, eC Figure 3: Photos are clustered by their captions. We can compute the visual, ev k, and caption, ec k, centers for all the clusters, as well as the local transition, PL, and global pairwise ordering, PG, probabilities between these events based on the sequential patterns they exhibit in the training set. noisier. Because not all photos have informative captions, it is expected that this base clustering will form meaningful clusters only over a subset of the data. For each scenario, the largest cluster corresponds to the “miscellaneous” cluster as the captions in it tend to be relatively uninformative about specific events. This cluster is excluded when computing temporal knowledge probabilities (Section 4.2). The visual and textual representations of an event are computed using the average of the visual and textual features, respectively, of photos assigned to that event. We compute each textual affinity Ac i,k in the event assignment scores (Equation 3) as the cosine similarity between the textual features of the caption for photo pi and the textual representation of event ek. For textual features, we extract noun and verb unigrams using TurboTagger (Martins et al., 2013) and weigh them by their discriminativeness relative to their scenario, P(S|w). Given scenario S and word w, P(S|w) is defined as the number of albums for the scenario the word occurs in divided by the total number of albums in that scenario. The visual affinity Av i,k is the similarity between the visual features of photo pi and the visual representation of event ek. For visual features, we use the convolutional features from the final layer activations of the 16-layer VGGNet model (Simonyan and Zisserman, 2015). 4.2 Temporal Knowledge Local transition probabilities. These probabilities, denoted as PL, encode an expected sequence of events using temporal patterns among adjacent 1772 Wedding Camping Funeral aisle Walking down the aisle tent Inside our tent service Graveside service Bride walking down the aisle Setting up the tent The service vow Exchanging vows fire Building the Fire pay respect Paying Respects Reading the vows Getting the Fire going Respect Reciting vows to each other Around the Fire dance First Dance sunset Watching the Sunset goodbye Saying Goodbye Everybody Dancing Sunset from camp Dancing the Night Away Sunset on the first night Table 2: Sample learned events and prototypical captions photos. We model PL for each pair of events as a multinomial distribution, PL(ek →el) = C(ek →el) PN m=1 C(ek →em) (8) where C is the observed counts of that specific event transition. This is the likelihood that an event ek is immediately followed by event el. Global pairwise ordering probabilities. These probabilities, denoted as PG, encode global structural patterns about events. We model PG for each pair of events as a binomial distribution by computing the likelihood that an event occurs before another at any point in an album, PG(ek ⇒el) = C(ek ⇒el) C(ek ⇒el) + C(el ⇒ek) (9) where C(ek ⇒el) is the observed counts of ek occurring anytime before el in all photo albums. 
These global probabilities model relations among events assigned to all photos in the album, not just events assigned to photos that are adjacent to one another. This distinction is important because these probabilities can encode global patterns between events and are not limited to modeling a sequential event chain. We use these learned temporal probabilities, PL and PG, in matrices L and G from φtemporal (Equation 5). These matrices are used to index local transition probabilities and global pairwise ordering probabilities for pairs of events when computing temporal knowledge scores in the inference model (Section 3.3). 4.3 Prototypical Captions After clustering the photos, the representative language of the captions in each cluster begins to tell a story about each scenario. The event names are automatically extracted using the most common content words among captions in the cluster. For each cluster, we also compile prototypical captions by extracting captions whose lemmatized forms are frequently observed throughout multiple albums in the scenario. Sample events and their prototypical captions from three scenarios are displayed in Table 2. 5 Experimental Setup Data split. For scenarios with more than 1000 albums, we use 100 albums for each of the development and test sets and use the rest for training. For scenarios with less than 1000 albums, we use 50 albums for each of the development and test sets, and the rest for training. Implementation details. We optimize our objective function using integer linear programming (Roth and Yih, 2004) with the Gurobi solver (Inc., 2015). For computational efficiency, temporally close sets of consecutive photos are treated as one unit during the optimization. We use these units to reduce the number of variables and constraints in the model from a function of the number of photos to a function of the number of units. We form these units heuristically by merging images agglomeratively when their timestamps are within a certain range of the closest image in a unit. When merging photos, the textual affinity of each unit for a particular event is the maximum affinity for that event among photos in that unit. The visual affinity of each unit is the average of all affinities for that event among photos in that unit. The textual and visual similarities of consecutive units are defined in terms of the similarities between the two photos at the units’ boundary. Temporal information for events not aligned well with a particular unit should not influence the objective, so we include temporal scores only for unit-event pairs which have both textual and visual event assignment scores greater than 0.05. Hyperparameters. We tune the hyperparameters using grid search on the development set. In models where the corresponding objective components are included, we set γce = 1, γve = 1, γcs = .5, γvs = .15, γlp = 1, and γgp = 4 Q (where Q is the 1773 Model Wedding Baby Birth Marathon Cooking Funeral Barbecue Indep. Day Camping Thanksgiving Paris Trip NY Trip Christmas k-MEANS 52.7 52.5 53.8 53.0 50.5 50.2 53.2 52.3 51.4 53.1 50.3 51.4 NO TEMPORAL 58.6 66.3 62.6 56.5 50.8 51.7 58.0 52.6 54.3 51.7 49.1 50.9 FULL MODEL 60.0 66.5 64.5 63.2 53.1 58.6 56.0 55.5 56.1 52.3 48.5 52.4 Table 3: Temporal ordering pairwise photo results. The metric reported is accuracy, the percentage of time the correct photo is picked as coming first based on the event assigned to it. Scenarios with an expected temporal structure are in the left half of the table. number of event units). 
For k-means clustering, we use 10 random restarts and 40 cluster centers for the WEDDING, CAMPING, PARIS TRIP, and NY TRIP scenarios. For all other scenarios, we use 30 cluster centers. 6 Experimental Results We evaluate the performance of our model on three tasks. The first task evaluates the effect of learned temporal knowledge in predicting the correct order of photos in an unseen album (Section 6.1). The second task evaluates the model’s ability to segment albums into logical groupings (Section 6.2). The third task evaluates the quality of prototypical captions and their use in photo album summarization (Section 6.3). 6.1 Temporal Ordering of Photos We evaluate the model’s ability to capture the temporal relationships between events in the scenario. Given two randomly selected photos pi and pj from an album, the task is to predict which of the photos appears earlier in the album using their event assignments. We compare the full model that assigns events to photos using ILP (Section 3) with two baselines: k-MEANS , which assigns events to photos using k-means clustering over captions (Section 4), and NO TEMPORAL: a variant of the full model that does not use temporal knowledge scores (φtemporal in Equation 1) for optimization. We run each method over a test photo album, in which the events ek and el are assigned to the photos pi and pj, respectively. We then use the learned global pairwise ordering probabilities (Section 4.2) to predict which photo appears earlier in the album. We report the accuracy of each method in predicting the order of photos compared to the actual order of photos in the albums. We perform this experiment 50 times for each album and average the number of correct choices across every album and every trial. Results. Table 3 reports the results of the full model compared to the baselines. The results show that temporal knowledge generally helps in predicting photo ordering. We observe that the full model achieves higher scores for scenarios for which we expect would have a sequential structure (e.g., WEDDING, BABY BIRTH, MARATHON). Conversely, the full model achieves lower overall scores in non-sequential scenarios (e.g., PARIS TRIP, NEW YORK TRIP). Qualitatively, we notice interesting temporal patterns such as the fact that during a marathon, the starting line occurs before the medal awards with 92.3% probability, or that Parisian tourists have a 24% chance (∼10× higher than random chance) of visiting the Eiffel Tower immediately after the Arc de Triomphe (a high local transition probability that correctly implies their real world proximity). 6.2 Album Segmentation Our model partitions photos in albums into coherent events. The album segmentation evaluation tests if the model recovers the same sequences of photos that a human would identify in a photo album as events. Evaluation. We had an impartial annotator label where they thought events began and ended in 10 candidate albums of greater than 100 photos for three scenarios: WEDDING, FUNERAL, CAMPING. We evaluate how well our model can replicate these boundaries with two metrics. The first metric is the F1 score of recovering the same boundaries annotated by humans. The second metric is d, the difference between the number of events segmented by the model compared to the annotated albums. We report results for exact event boundaries as well as relaxed boundaries where the start of an event can be r photos away from the start of an annotated event, where r is the relaxation coefficient. 
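The boundary metric can be computed with a greedy one-to-one matching of predicted to annotated event starts within r photos; the exact matching procedure is not spelled out, so the following is one plausible reading (a sketch, not the authors' evaluation script):

```python
def boundary_scores(pred_starts, gold_starts, r=0):
    # Greedily match each predicted event start to an unused annotated start
    # within r photos; precision/recall follow from the number of matches.
    unused = sorted(gold_starts)
    matched = 0
    for p in sorted(pred_starts):
        hit = next((g for g in unused if abs(g - p) <= r), None)
        if hit is not None:
            unused.remove(hit)
            matched += 1
    prec = matched / len(pred_starts) if pred_starts else 0.0
    rec = matched / len(gold_starts) if gold_starts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    d = len(pred_starts) - len(gold_starts)   # difference in segment counts
    return prec, rec, f1, d
```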
For reference, we note that albums in the wedding scenario were dual annotated and the agreement between annotators is 56.9% for r = 0 and 77.5% for r = 2. Results. Table 4 shows comparison of the the full ILP model with same baselines we described before, k-MEANS and NO TEMPORAL. The table shows that the full model generally outperforms the kMEANS baseline for all three scenarios. In the WEDDING scenario, the F1 score for the full 1774 Model r WEDDING FUNERAL CAMPING F1 d F1 d F1 d k-MEANS 0 27.1 32.9 27.9 29.6 31.2 46.0 NO TEMPORAL 32.0 -5.6 35.9 .9 22.0 -15.6 FULL MODEL 37.8 1.3 32.2 4.2 27.5 -10.1 k-MEANS 2 40.8 32.9 38.3 29.6 46.2 46.0 NO TEMPORAL 49.6 -5.6 57.6 .9 35.4 -15.8 FULL MODEL 57.5 1.3 51.6 5.0 51.4 -10.1 Table 4: Segmentation results for our full model. F1 scores how often our model recovers the same boundaries annotated by humans. d is the average difference between the number of events identified by the model in an album and marked by annotators. r is the relaxation coefficient. Feature Group Excluded P R F1 d FULL MODEL 36.7 42.8 37.8 1.3 - Visual Event Affinity 37.7 37.0 35.3 -1.7 - Textual Segmentation 37.1 41.5 37.4 .8 - Visual Segmentation 35.1 42.1 36.5 1.7 - Local Ordering Probs. 36.9 40.3 36.9 .2 - Global Ordering Probs. 40.5 25.0 29.5 -5.8 Table 5: Ablation study of objective function components for the wedding scenario. P, R, and F1 are the precision, recall and F-measure of recovering the same boundaries annotated by humans. d is the average difference between the number of events identified by our models and the annotators. model is consistently higher. The k-MEANS baseline oversamples the number of events in albums, which is indicated by an average d significantly greater than 0. For the FUNERAL scenario, the NO TEMPORAL baseline outperforms the full model. We attribute this difference to the smaller data subset (see Table 1) making it harder to learn the temporal relations in the scenario, which makes the contributions of the local and global temporal probabilities unexpected. In the CAMPING scenario, the F1 score for the k-MEANS baseline is higher than that of the full model when r = 0. At a highlevel, CAMPING is a scenario we expect has less of a known structure compared to other scenarios and may be harder to segment into its events. Ablation Study. Table 5 depicts the performance of ablations of the full model for the wedding scenario. Results show that removing any component of the objective functions yields lower recall and F1 scores than the full model for r = 0. The exception is removing local ordering probabilities, which yields a higher d. These observations support the hypothesis that all of the components of the objective function contribute to segmenting the album into subsequences of photos depicting the same event. Particularly, we note the degradation when removing the global ordering probabilities, indicating that approaches which model only local event transitions such as hidden Markov models would not be suitable for this task. 6.3 Photo Album Summarization The final experiment evaluates how our learned prototypical captions can improve downstream tasks such as summarization and captioning. 6.3.1 Summaries The goal of a good summary is to select the most salient pictures of an album. In our setting, a good summary should have a high coverage of the events in an album and choose the photos that most appropriately depict these events. Given a photo budget b, we choose a subset of photos that aims for these goals. 
To summarize a test album, we run our model over the entire album. This will yield h unique events assigned to the photos in the album. For each of these h events, we choose the photo with the highest event assignment score for that event (Equation 3) to be in the summary. If h > b, we count the number of photos in the training set assigned to each of the h events and choose the photos corresponding to the b events with the largest membership of photos in the training set. If h < b, we complete the summary with b −h photos from the “miscellaneous” event that are spaced evenly throughout the album. Finally, we replace the caption of each selected photo with a prototypical caption (Section 4.3) for the assigned event. Baseline. We evaluate against two baselines. The first baseline, KTH, involves including a photo in the summary every k = M/b photos. The second baseline, k-MEANS, uses the events assigned to photos from k-means clustering and then picks b photos in the same manner as our main model. Evaluation. We evaluate the summaries produced by each method with a human evaluation using Amazon Mechanical Turk (AMT). We use albums from the test set that contain more than 40 photos for the wedding scenario. For each album, at random, we present two summaries generated by two algorithms. AMT workers are instructed to choose the better summary considering both the images and the captions. For each comparison of two summaries for an album, we aggregate answers from three workers by majority voting. We set b = 7. The number of assigned events in an album, h, varies by album. Results. As seen in Table 6, the summary from the full model is preferred 57.7% of the time compared to the KTH baseline. The summaries generated using the full model perform slightly better than the summaries from k-MEANS. We attribute 1775 Figure 4: Example summaries from the wedding, Paris trip, and baby birth scenarios. In cases where the album had less events than b, the additionally chosen photos are outlined in red. These photos do not have their caption replaced by a prototypical captions and merely fill out the summary. Method Selection Rates FULL MODEL vs. KTH 57.7 42.3 FULL MODEL vs. k-MEANS 53.8 47.2 k-MEANS vs. KTH 53.8 47.2 Table 6: Summarization results. The selection rates indicate the percentage of time the corresponding method in the left-most column was picked. the superior performance of the full model to the fact that it redistributes photos with noisy captions throughout the events, allowing for a larger sample to estimate visual representations of events, yielding more accurate visual affinity measurements to choose the summarization photos. As can be seen from qualitative examples in Figure 4, the photos chosen and the captions assigned cover key events that would occur during the scenario and describe them in a coherent way. Additional examples are available at https://www.cs.washington.edu/ projects/nlp/protoevents. 6.3.2 Prototypical Captions We also evaluate the quality of the prototypical captions assigned to every photo in the summaries. For each album, we use the same sets of b photos from the full model in the summarization task and evaluate the quality of the prototypical captions paired with that group of photos. Evaluation. 
We evaluate the quality of captions assigned to every photo by asking AMT workers to rate the captions on three different metrics: grammaticality, relevance to the scenario to which the image belongs, and relevance to its paired imMethod Scenario Image Grammar Relevance Relevance LSTM 4.90 2.85 3.74 FULL MODEL 4.55 3.66 4.08 RAW CAPTIONS 4.10 4.36 4.28 Table 7: Captioning results. We evaluate the caption quality of the prototypical captions of the full model, those generated by an LSTM trained on the raw captions, and original captions. Captions were evaluated on 3 metrics: grammatical correctness, how relevant they were to the scenario, and how relevant they were to their assigned image. age. Five AMT workers rate each group of b photos on a five point Likert scale for each metric. We compare the prototypical captions for every photo in the summary with captions generated by an LSTM model3 trained on every photo-caption pair in the training set for a scenario. We also compare with the original raw captions for each image in the summary. Because we chose photos with the highest event assignment scores (Equation 3) to be in the summary, the raw captions for this evaluation are cleaner and more descriptive than most captions in the dataset. Results. Our model outperforms the LSTMgenerated captions in the image relevance and grammaticality scores, but did worse in scenario 3We use a single-layer encoder-decoder LSTM. The cell state and the input embedding dimensions are 256. Visual inputs are the final layer convolutional features of the VGG-16 model and are fine-tuned during training. We use RMSprop to train the network with a base learning rate of .0001 and 30% dropout. We train the model for 45 epochs on a single NVIDIA Titan X GPU with mini batch size 100. To decode, we use beam search with beam size 5. 1776 relevance. We attribute this result to LSTM captions having little caption variation because the model learns frequency statistics without any knowledge of latent events. Almost all LSTM captions mention the words bride, wedding, or groom, yielding a very high scenario score for the caption, even if that caption is grammatically incorrect or irrelevant to the image. As expected the raw captions have high relevance to the original image, and they are grammatical, but can be less relevant to the corresponding scenario. 7 Related Work Previous studies have explored unsupervised induction of salient content structure in newswire texts (Barzilay and Lee, 2004), temporal graph representations (Bramsen et al., 2006), and storyline extraction and event summarization (Xu et al., 2013). Another line of research finds the common event structure from children’s stories (McIntyre and Lapata, 2009), where the learned plot structure is used to stochastically generate new stories (Goyal et al., 2010; Goyal et al., 2013). Our work similarly aims to learn the typical temporal patterns and compositional elements that define common scenarios, but with multimodal integration. Compared to studies that learn narrative schemas from natural language (Pichotta and Mooney, 2014; Jans et al., 2012; Chambers and Jurafsky, 2009; Chambers, 2013; Cassidy et al., 2014), or compile script knowledge from crowdsourcing (Regneri et al., 2010), our work explores a new source of knowledge that allows grounded event learning with temporal dimensions, resulting in a new dataset of scenario types that are not naturally accessible from newswire or literature. 
While recent studies have explored videos and photo streams as a source of discovering complex events and learning their sequential patterns (Kim and Xing, 2014; Kim and Xing, 2013; Tang et al., 2012; Tschiatschek et al., 2014), their focus was mostly on the visual modality. Zhang et al. (2015) explored multimodal information extraction focusing specifically on identifying video clips that referred to the same event in television news. This contrasts to the goal of our study that aims to learn the temporal structure by which common scenarios unfold. Integrating language and vision has attracted increasing attention in recent years across diverse tasks such as image captioning (Karpathy and FeiFei, 2015; Vinyals et al., 2015; Fang et al., 2015; Xu et al., 2015; Chen et al., 2015), cross modal semantic modeling (Lazaridou et al., 2015), information extraction (Morency et al., 2011; Rosas et al., 2013; Zhang et al., 2015; Izadinia et al., 2015), common-sense knowledge (Vedantam et al., 2015; Bagherinezhad et al., 2016), and visual storytelling (Huang et al., 2016). Our work is similar to both common sense knowledge learning and visual story completion. Our model learns commonsense knowledge on the hierarchical and temporal event structure from scenario-specific multimodal photo albums, which can be viewed as visual stories about common life events. Recent work focused on photo album summarization using visual (Sadeghi et al., 2015) and multimodal representations (Sinha et al., 2011). Our work identifies the nature of common events in scenarios and learns their timelines and characteristic forms. 8 Conclusion We introduce a novel exploration to learn scriptlike knowledge from photo albums. We model stochastic event structure to learn both the event representations (textual and visual) and the temporal relations among those events. Our event induction method incorporates learned knowledge about events, partitions photo albums into segments, and assigns events to those segments. We show the significance of our model in learning and using learned knowledge for photo ordering, album segmentation, and summarization. Finally, we provide a dataset depicting 12 scenarios with ∼1.5 M images for future research. Future directions could include exploring nuances in the type of temporal knowledge that can be learned across different scenarios. Acknowledgements We thank the anonymous reviewers for many insightful comments and members of UW NLP for feedback and support. The work is in part supported by NSF grants IIS-1408287, IIS-1524371, IIS-1616112, DARPA under the CwC program through the ARO (W911NF-15-1-0543), the Allen Institute for AI (66-9175), the Allen Distinguished Investigator Award, and gifts by Google and Facebook. 1777 References Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, and Ali Farhadi. 2016. Are elephants bigger than butterflies? reasoning about sizes of objects. In Proceedings of the Conference in Artificial Intelligence (AAAI). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In NAACL-HLT. Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing temporal graphs. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 189–198. Association for Computational Linguistics. Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In ACL. 
Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In ACL. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 602–610. Association for Computational Linguistics. Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Jianfu Chen, Polina Kuznetsova, David S Warren, and Yejin Choi. 2015. D´eja image-captions: A corpus of expressive descriptions in repetition. In NAACLHLT. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K. Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Amit Goyal, Ellen Riloff, and Hal Daum´e III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 77–86. Association for Computational Linguistics. Amit Goyal, Ellen Riloff, et al. 2013. A computational model for plot units. Computational Intelligence, 29(3):466–488. Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, and Margaret Mitchell. 2016. Visual storytelling. In NAACL. Gurobi Optimization Inc. 2015. Gurobi optimizer reference manual. Hamid Izadinia, Fereshteh Sadeghi, Santosh K Divvala, Hannaneh Hajishirzi, Yejin Choi, and Ali Farhadi. 2015. Segment-phrase table for semantic segmentation, visual entailment and paraphrasing. In Proceedings of the IEEE International Conference on Computer Vision, pages 10–18. Bram Jans, Steven Bethard, Ivan Vuli, and MarieFrancine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 336–344. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Gunhee Kim and Eric P Xing. 2013. Jointly aligning and segmenting multiple web photo streams for the inference of collective photo storylines. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 620–627. IEEE. Gunhee Kim and Eric Xing. 2014. Reconstructing storyline graphs for image recommendation from web community photos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3882–3889. Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 153–163, Denver, Colorado, May–June. Association for Computational Linguistics. Andr´e FT Martins, Miguel B Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In ACL. 
Neil McIntyre and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 217–225. Association for Computational Linguistics. Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th international conference on multimodal interfaces, pages 169–176. ACM. 1778 Karl Pichotta and Raymond J Mooney. 2014. Statistical script learning with multi-argument events. In EACL, volume 14, pages 220–229. Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 979–988. Association for Computational Linguistics. Ver´onica P´erez Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Multimodal sentiment analysis of spanish online videos. IEEE Intelligent Systems, (3):38–45. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. Technical report, DTIC Document. Fereshteh Sadeghi, J Rafael Tena, Ali Farhadi, and Leonid Sigal. 2015. Learning to select and order vacation photographs. In Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on, pages 510–517. IEEE. Roger C Schank and Robert P Abelson. 1975. Scripts, plans, and knowledge. Yale University. K. Simonyan and A. Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR). Pinaki Sinha, Sharad Mehrotra, and Ramesh Jain. 2011. Summarization of personal photologs using multidimensional content and context. In Proceedings of the 1st ACM International Conference on Multimedia Retrieval, page 4. ACM. Kevin Tang, Li Fei-Fei, and Daphne Koller. 2012. Learning latent temporal structure for complex event detection. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1250–1257. IEEE. Sebastian Tschiatschek, Rishabh K Iyer, Haochen Wei, and Jeff A Bilmes. 2014. Learning mixtures of submodular functions for image collection summarization. In Advances in Neural Information Processing Systems, pages 1413–1421. R. Vedantam, X. Lin, T. Batra, C. L. Zitnick, and D. Parikh. 2015. Learning common sense through visual abstraction. In Proceedings of the International Conference in Computer Vision (ICCV). Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Shize Xu, Shanshan Wang, and Yan Zhang. 2013. Summarizing complex events: a cross-modal solution of storylines extraction and reconstruction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1281–1291. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of The 32nd International Conference on Machine Learning, pages 2048–2057. Tongtao Zhang, Hongzhi Li, Heng Ji, and Shih-Fu Chang. 2015. Cross-document event coreference resolution based on cross-media features. In Proc. 
Conference on Empirical Methods in Natural Language Processing.
2016
167
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1780–1790, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-Lingual Image Caption Generation Takashi Miyazaki∗ Yahoo Japan Corporation Tokyo, Japan [email protected] Nobuyuki Shimizu∗ Yahoo Japan Corporation Tokyo, Japan [email protected] Abstract Automatically generating a natural language description of an image is a fundamental problem in artificial intelligence. This task involves both computer vision and natural language processing and is called “image caption generation.” Research on image caption generation has typically focused on taking in an image and generating a caption in English as existing image caption corpora are mostly in English. The lack of corpora in languages other than English is an issue, especially for morphologically rich languages such as Japanese. There is thus a need for corpora sufficiently large for image captioning in other languages. We have developed a Japanese version of the MS COCO caption dataset and a generative model based on a deep recurrent architecture that takes in an image and uses this Japanese version of the dataset to generate a caption in Japanese. As the Japanese portion of the corpus is small, our model was designed to transfer the knowledge representation obtained from the English portion into the Japanese portion. Experiments showed that the resulting bilingual comparable corpus has better performance than a monolingual corpus, indicating that image understanding using a resource-rich language benefits a resource-poor language. 1 Introduction Automatically generating image captions by describing the content of an image using natural language sentences is a challenging task. It is especially challenging for languages other than En∗∗Both authors contributed equally to this work. glish due to the sparsity of annotated resources in the target language. A promising solution to this problem is to create a comparable corpus. To support the image caption generation task in Japanese, we have annotated images taken from the MS COCO caption dataset (Chen et al., 2015b) with Japanese captions. We call our corpus the “YJ Captions 26k Dataset.” While the size of our dataset is comparatively large with 131,740 captions, it greatly trails the 1,026,459 captions in the MS COCO dataset. We were thus motivated to transfer the resources in English (source language) to Japanese and thereby improve image caption generation in Japanese (target language). In natural language processing, a task involving transferring information across languages is known as a cross-lingual natural language task, and well known tasks include cross-lingual sentiment analysis (Chen et al., 2015a), cross-lingual named entity recognition (Zirikly and Hagiwara, 2015), cross-lingual dependency parsing (Guo et al., 2015), and cross-lingual information retrieval (Funaki and Nakayama, 2015). Existing work in the cross-lingual setting is usually formulated as follows. First, to overcome the language barrier, create a connection between the source and target languages, generally by using a dictionary or parallel corpus. Second, develop an appropriate knowledge transfer approach to leverage the annotated data from the source language for use in training a model in the target language, usually supervised or semi-supervised. 
These two steps typically amount to automatically generating and expanding the pseudo-training data for the target language by exploiting the knowledge obtained from the source language. We propose a very simple approach to crosslingual image caption generation: exploit the English corpus to improve the performance of image caption generation in another language. In this ap1780 proach, no resources besides the images found in the corpus are used to connect the languages, and we consider our dataset to be a comparable corpus. Paired texts in a comparable corpus describe the same topic, in this case an image, but unlike a parallel corpus, the texts are not exact translations of each other. This unrestrictive setting enables the model to be used to create image caption resources in other languages. Moreover, this model scales better than creating a parallel corpus with exact translations of the descriptions. Our transfer model is very simple. We start with a neural image caption model (Vinyals et al., 2015) and pretrain it using the English portion of the corpus. We then remove all of the trained neural network layers except for one crucial layer, the one closest to the vision system. Next we attach an untrained Japanese generation model and train it using the Japanese portion of the corpus. This results in improved generation in Japanese compared to using only the Japanese portion of the corpus. To the best of our knowledge, this is the first paper to address the problem of cross-lingual image caption generation. Our contribution is twofold. First, we have created and plan to release the first ever significantly large corpus for image caption generation for the Japanese language, forming a comparable corpus with existing English datasets. Second, we have created a very simple model based on neural image caption generation for Japanese that can exploit the English portion of the dataset. Again, we are the first to report results in cross-lingual image caption generation, and our surprisingly simple method improves the evaluation metrics significantly. This method is well suited as a baseline for future work on cross-lingual image caption generation. The paper is organized as follows. In the next section, we describe related work in image caption generation and list the corpora currently available for caption generation. Then in Section 3 we present the statistics for our corpus and explain how we obtained them. We then explain our model in Section 4 and present the results of our experimental evaluation in Section 5. We discuss the results in Section 6, and conclude in Section 7 with a summary of the key points. 2 Related Work Recent advances in computer vision research have led to halving the error rate between 2012 and 2014 at the Large Scale Visual Recognition Challenge (Russakovsky et al., 2015), largely driven by the adoption of deep neural networks (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; Donahue et al., 2014; Sharif Razavian et al., 2014). Similarly, we have seen increased adaptation of deep neural networks for natural language processing. In particular, sequence-to-sequence training using recurrent neural networks has been successfully applied to machine translation (Cho et al., 2014; Bahdanau et al., 2015; Sutskever et al., 2014; Kalchbrenner and Blunsom, 2013). These developments over the past few years have led to renewed interest in connecting vision and language. 
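Before surveying related work further, the transfer recipe described in Section 1 can be made concrete at the level of parameter bookkeeping: pretrain an English captioner, keep only the layer closest to the vision system (the image-encoding matrix), and re-initialize all language-specific parts before training on Japanese. The sketch below is a hedged illustration with made-up shapes and names (image_dim, embed_dim, etc.), not the authors' implementation.

```python
import numpy as np

def init_caption_params(vocab_size, image_dim=4096, embed_dim=256,
                        hidden_dim=256, seed=0):
    """Randomly initialize an image projection W_im, word embeddings W_e,
    LSTM weights, and a word decoder W_d (hypothetical shapes for illustration)."""
    rng = np.random.RandomState(seed)
    mat = lambda r, c: rng.normal(scale=0.01, size=(r, c))
    return {
        "W_im": mat(embed_dim, image_dim),                    # vision-side projection
        "W_e": mat(embed_dim, vocab_size),                    # language-specific
        "lstm": mat(4 * hidden_dim, embed_dim + hidden_dim),  # language-specific
        "W_d": mat(vocab_size, hidden_dim),                   # language-specific
    }

def transfer_to_japanese(english_params, ja_vocab_size):
    """Keep only the layer closest to the vision system (W_im) from the
    English-pretrained captioner; re-initialize all language-specific parts."""
    ja_params = init_caption_params(ja_vocab_size, seed=1)
    ja_params["W_im"] = english_params["W_im"].copy()  # reused across languages
    return ja_params

english = init_caption_params(vocab_size=10000)        # pretrained on English captions
japanese = transfer_to_japanese(english, ja_vocab_size=8000)
```

The point of the sketch is only to show which parameters cross the language barrier: the image projection is shared, while the embeddings, recurrent weights, and decoder are learned from scratch on the Japanese portion.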
The encoder-decoder framework (Cho et al., 2014) inspired the development of many methods for generating image captions since generating an image caption is analogous to translating an image into a sentence. Since 2014, many research groups have reported a significant improvement in image caption generation due to using a method that combines a convolutional neural network with a recurrent neural network. Vinyals et al. used a convolutional neural network (CNN) with inception modules for visual recognition and long short-term memory (LSTM) for language modeling (Vinyals et al., 2015). Xu et al. introduced an attention mechanism that aligns visual information and sentence generation for improving captions and understanding of model behavior (Xu et al., 2015). The interested reader can obtain further information elsewhere (Bernardi et al., 2016). These developments were made possible due to a number of available corpora. The following is a list of available corpora that align images with crowd-sourced captions. A comprehensive list of other kinds of corpora connecting vision and language, e.g., visual question answering, is available elsewhere (Ferraro et al., 2015). 1. UIUC Pascal Dataset (Farhadi et al., 2010) includes 1,000 images with 5 sentences per image; probably one of the first datasets. 2. Abstract Scenes Dataset (Clipart) (Zitnick et al., 2013) contains 10,020 images of children playing outdoors associated with 60,396 descriptions. 1781 3. Flickr 30K Images (Young et al., 2014) extends Flickr datasets (Rashtchian et al., 2010) and contains 31,783 images of people involved in everyday activities. 4. Microsoft COCO Dataset (MS COCO) (Lin et al., 2014; Chen et al., 2015b) includes about 328,000 images of complex everyday scenes with common objects in naturally occurring contexts. Each image is paired with five captions. 5. Japanese UIUC Pascal Dataset (Funaki and Nakayama, 2015) is a Japanese translation of the UIUC Pascal Dataset. To the best of our knowledge, there are no large datasets for image caption generation except for English. With the release of the YJ Captions 26k dataset, we aim to remedy this situation and thereby expand the research horizon by exploiting the availability of bilingual image caption corpora. 3 Statistics for Data Set In this section we describe the data statistics and how we gathered data for the YJ Captions 26k dataset. For images, we used the Microsoft COCO dataset (Chen et al., 2015b). The images in this dataset were gathered by searching for pairs of 80 object categories and various scene types on Flickr. They thus tended to contain multiple objects in their natural context. Objects in the scene were labeled using per-instance segmentations. This dataset contains pictures of 91 basic object types with 2.5 million labeled instances. To collect Japanese descriptions of the images, we used Yahoo! Crowdsourcing 1, a microtask crowdsourcing service operated by Yahoo Japan Corporation. Given 26,500 images taken from the training part of the MS COCO dataset, we collected 131,740 captions in total. The images had on average 4.97 captions; the maximum number was 5 and the minimum was 3. On average, each caption had 23.23 Japanese characters. We plan to release the YJ Captions 26k dataset 2. 3.1 Crowdsourcing Procedure Our captions were human generated using Yahoo! Crowdsourcing. As this crowdsourcing platform is operated in Japan, signing up for the service and participating require Japanese proficiency. 
Thus, 1http://crowdsourcing.yahoo.co.jp 2http://research-lab.yahoo.co.jp/software/index.html Figure 1: User Interface we assumed that the participants were fluent in Japanese. First, we posted a pilot task that asked the participants to describe an image. We then examined the results and selected promising participants (comprising a “white list”) for future task requests. That is, only the participants on the white list could see the next task. This selection process was repeated, and the final white list included about 600 participants. About 150 of them regularly participated in the actual image caption collection task. We modified the task request page and user interface on the basis of our experience with the pilot task. In order to prevent their fatigue, the tasks were given in small batches so that the participants were unable to work over long hours. In our initial trials, we tried a direct translation of the instructions used in the MS-COCO English captions. This however did not produce Japanese captions comparable to those in English. This is because people describe what appears unfamiliar to them and do not describe things they take for granted. Our examination of the results from the pilot tasks revealed that the participants generally thought that the pictures contained non-Japanese people and foreign places since the images originated from Flickr and no scenery from Japan was included in the image dataset. When Japanese 1782 crowds are shown pictures with scenery in the US or Europe in MS-COCO dataset, the scenes themselves appear exotic and words such as ‘foreign’ and ‘oversea’ would be everywhere in the descriptions. As such words are not common in the original dataset, and to make the corpus nicer complement to the English dataset and to reduce the effects of such cultural bias, we modified the instructions: “2. Please give only factual statements”; “3. Please do not specify place names or nationalities.” We also strengthened two sections in the task request page and added more examples. The interface is shown in Figure 1. The instructions in the user interface can be translated into English as “Please explain the image using 16 or more Japanese characters. Write a single sentence as if you were writing an example sentence to be included in a textbook for learning Japanese. Describe all the important parts of the scene; do not describe unimportant details. Use correct punctuation. Write a single sentence, not multiple sentences or a phrase.” Potential participants are shown task request pages, and the participants select which crowdsourcing task(s) to perform. The task request page for our task had the following instructions (English translation): 1. Please explain an image using 16 or more Japanese characters. Please write a single sentence as if you were writing an example sentence to be included in a textbook for learning Japanese. (a) Do not use incorrect Japanese. (b) Use a polite style of speech (desu/masu style) as well as correct punctuation. (c) Write a single complete sentence that ends with a period. Do not write just a phrase or multiple sentences. 2. Please give only factual statements. (a) Do not write about things that might have happened or might happen in the future. Do not write about sounds. (b) Do not speculate. Do not write about something about which you feel uncertain. (c) Do not state your feelings about the scene in the picture. Do not use an overly poetic style. (d) Do not use a demonstrative pronoun such as ’this’ or ’here.’ 3. 
Please do not specify place names or nationalities. (a) Please do not give proper names. 4. Please describe all the important parts of the scene; do not describe unimportant details. Together with the instructions, we provided 15 examples (1 good example; 14 bad examples). Upon examining the collected data, manual checks of first 100 images containing 500 captions revealed that 9 captions were clearly bad, and 12 captions had minor problems in descriptions. In order to further improve the quality of the corpus, we crowdsourced a new data-cleaning task. We showed each participant an image and five captions that describe the image and asked to fix them. The following is the instructions (English translation) for the task request page for our datacleaning task. 1. There are five sentences about a hyper-linked image, and several sentences require fixes in order to satisfy the conditions below. Please fix the sentences, and while doing so, tick a checkbox of the item (condition) being fixed. 2. The conditions that require fixes are: (a) Please fix typographical errors, omissions and input-method-editor conversion misses. (b) Please remove or rephrase expressions such as ‘oversea’, ‘foreign’ and ‘foreigner.’ (c) Please remove or rephrase expressions such as ‘image’, ‘picture’ and ‘photographed.’ (d) Please fix the description if it does not match the contents of the image. (e) Please remove or rephrase subjective expressions and personal impressions. (f) If the statement is divided into several sentences, please make it one sentence. (g) If the sentence is in a question form, please make it a declarative sentence. (h) Please rewrite the entire sentence if meeting all above conditions requires extensive modifications. (i) If there are less than 16 characters, please provide additional descriptions so that the sentence will be longer than 16 characters. For each condition, we provided a pair of examples (1 bad example and 1 fixed example). To gather participants for the data-cleaning task, we crowdsourced a preliminary user qualification task that explained each condition requiring fixes in the first half, then quizzed the participants in the second half. This time we obtained over 900 qualified participants. We posted the data-cleaning task to these qualified participants. The interface is shown in Figure 2. The instructions in the user interface are very similar to the task request page, except that we have an additional checkbox: (j) All conditions are satisfied and no fixes were necessary. We provided these checkboxes to be used as a checklist, so as to reduce failure by compensating for potential limits of participants’ memory and attention, and to ensure consistency and completeness in carrying out the data-cleaning task. For this data-cleaning task, we had 26,500 images totaling 132,500 captions checked by 267 participants. The number of fixed captions are 1783 Figure 2: Data Cleaning Task User Interface 45,909. To our surprise, a relatively large portion of the captions were fixed by the participants. We suspect that in our data-cleaning task, the condition (e) was especially ambiguous for the participants, and they errored on the cautious side, fixing “a living room” to just “a room”, thinking that a room that looks like a living room may not be a living room for the family who occupies the house, for example. Another example includes fixing “beautiful flowers” to just “flowers” because beauty is in the eye of the beholder and thought to be subjective. 
The percentage of the ticked checkboxes is as follows: (a) 27.2%, (b) 5.0%, (c) 12.3%, (d) 34.1%, (e) 28.4%, (f) 3.9%, (g) 0.3%, (h) 11.6%, (i) 18.5%, and (j) 24.0%. Note that a checkbox is ticked if there is at least one sentence out of five that meets the condition. In machine learning, this setting is called multiple-instance multiple-label problem (Zhou et al., 2012). We cannot directly infer how many captions correspond to a condition ticked by the participants. After this data-cleaning task, we further removed a few more bad captions that came to our attention. The resulting corpus finally contains 131,740 captions as noted in the previous section. 4 Methodology !"#$! !"#$! %&'( %)( %*( +,,-*.( /01! %)( !"#$! %*( %)( 23! 23! *4*5623.! *4*5623.! &7! 7628*)(2'93:(423:;2:*7! Figure 3: Model Overview 4.1 Model Overview Figure 3 shows an overview of our model. Following the approach of Vinyals et al. (Vinyals et al., 2015), we used a discriminative model that maximizes the probability of the correct description given the image. Our model is formulated as θ∗= arg max θ ∑ (I,S) N ∑ t=0 log p(St|I, S0, ..., St−1; θ), (1) where the first summation is over pairs of an image I and its correct transcription S. For the second summation, the sum is over all words St in S, and N is the length of S. θ represents the model parameters. Note that the second summation represents the probability of the sentence with respect to the joint probability of its words. We modeled p(St|I, S0, ..., St−1; θ) by using a recurrent neural network (RNN). To model the sequences in the RNN, we let a fixed length hidden state or memory ht express the variable number of words to be conditioned up to t −1. The ht 1784 is updated after obtaining a new input xt using a non-linear function f, so that ht+1 = f(ht, xt). Since an LSTM network has state-of-the art performance in sequence modeling such as machine translation, we use one for f, which we explain in the next section. A combination of LSTM and CNN are used to model p(St|I, S0, ..., St−1; θ). x−1 = WimCNN(I) (2) xt = WeSt, t ∈{0...N −1} (3) pt+1 = Softmax(WdLSTM(xt)), t ∈{0...N −1} (4) where Wim is an image feature encoding matrix, We is a word embedding matrix, and Wd is a word decoding matrix. 4.2 LSTM-based Language Model An LSTM is an RNN that addresses the vanishing and exploding gradients problem and that handles longer dependencies well. An LSTM has a memory cell and various gates to control the input, the output, and the memory behaviors. We use an LSTM with input gate it, input modulation gate gt, output gate ot, and forgetting gate ft. The number of hidden units ht is 256. At each time step t, the LSTM state ct, ht is as follows: it = σ(Wixxt + Wihht−1 + bi) (5) ft = σ(Wfxxt + Wfhht−1 + bf) (6) ot = σ(Woxxt + Wohht−1 + bo) (7) gt = ϕ(Wcxxt + Wchht−1 + bc) (8) ct = ft ⊙ct−1 + it ⊙gt (9) ht = ot ⊙ϕ(ct), (10) where σ(x) = (1 + e−x)−1 is a sigmoid function, ϕ(x) = (ex −e−x)/(ex + e−x) is a hyperbolic tangent function, and ⊙denotes the element-wise product of two vectors. W and b are parameters to be learned. From the values of the hidden units ht, the probability distribution of words is calculated as pt+1 = Softmax(Wdht). (11) We use a simple greedy search to generate captions as a sequence of words, and, at each time step t, the predicted word is obtained using St = arg maxS pt. 
4.3 Image Feature Extraction with Deep Convolutional Neural Network The image recognition performance of deep convolutional neural network models has rapidly advanced in recent years, and they are now widely used for various image recognition tasks. We used a 16-layer VGGNet (Simonyan and Zisserman, 2014), which was a top performer at the ImageNet Large Scale Visual Recognition Challenge in 2014. A 16-layer VGGNet is composed of 13 convolutional layers having small 3x3 filter kernels and 3 fully connected layers. An image feature is extracted as a 4096-dimensional vector of the VGGNet’s fc7 layer, which is the second fully connected layer from the output layer. VGGNet was pretrained using the ILSVRC2014 subset of the ImageNet dataset, and its weights were not updated through training. 4.4 Dataset Split Because our caption dataset is annotated for only 26,500 images of the MS COCO training set, we reorganized the dataset split for our experiments. Training and validation set images of the MS COCO dataset were mixed and split into four blocks, and these blocks were assigned to training, validation, and testing as shown in Table 1. All blocks were used for the English caption dataset. Blocks B, C, and D were used for the Japanese caption dataset. block no. of images split language A 96,787 train En B 22,500 train En, Ja C 2,000 val En, Ja D 2,000 test En, Ja total 123,287 Table 1: Dataset Split 4.5 Training The models were trained using minibatch stochastic gradient descent, and the gradients were computed by backpropagation through time. Parameter optimization was done using the RMSprop algorithm (Tieleman and Hinton, 2012) with an initial learning rate of 0.001, a decay rate of 0.999, and ϵ of 1.0−8. Each image minibatch contained 100 image features, and the corresponding caption minibatch contained one sampled caption per image. To evaluate the effectiveness of Japanese 1785 image caption generation, we used three learning schemes. Monolingual learning This was the baseline method. The model had only one LSTM for Japanese caption generation, and only the Japanese caption corpus was used for training. Alternate learning In this scheme, a model had two LSTMs, one for English and one for Japanese. The training batches for captions contained either English or Japanese, and the batches were fed into the model alternating between English and Japanese. Transfer learning A model with one LSTM was trained completely for the English dataset. The trained LSTM was then removed, and another LSTM was added for Japanese caption generation. Wim was shared between the English and Japanese training. These models were implemented using the Chainer neural network framework (Tokui et al., 2015). We consulted NeuralTalk (Karpathy, 2014), an open source implemenation of neural network based image caption generation system, for training parameters and dataset preprocessing. Training took about one day using NVIDIA TITAN X/Tesla M40 GPUs. 5 Evaluation 0 10000 20000 30000 40000 50000 No. of iterations 0.1 0.2 0.3 0.4 0.5 0.6 0.7 CIDEr transfer monolingual alternate Figure 4: Learning Curve Represented by CIDEr Score 5.1 Evaluation Metrics We used six standard metrics for evaluating the quality of the generated Japanese sentences: BLEU-1, BLEU-2, BLEU-3, BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), and CIDEr-D (Vedantam et al., 2014). We used the COCO caption evaluation tool (Chen et al., 2015b) to compute the metrics. BLEU (Papineni et al., 2002) was originally designed for automatic machine translation. 
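As a compact restatement of Eqs. (2)-(11), the following numpy sketch implements one decoding step of the captioner: the CNN image feature is projected once by W_im, each word is embedded by W_e, the LSTM gates update the memory cell and hidden state, and the next-word distribution is the softmax over W_d h_t, with greedy decoding taking the argmax. All shapes and parameter names are our assumptions for illustration; the published model was implemented in Chainer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def init_params(vocab_size, img_dim=4096, emb=256, hid=256, seed=0):
    """Hypothetical parameter shapes for illustration only."""
    rng = np.random.RandomState(seed)
    m = lambda r, c: rng.normal(scale=0.01, size=(r, c))
    p = {"W_im": m(emb, img_dim), "W_e": m(emb, vocab_size), "W_d": m(vocab_size, hid)}
    for g in "ifoc":   # input, forget, output gates and input modulation
        p["W_%sx" % g] = m(hid, emb)
        p["W_%sh" % g] = m(hid, hid)
        p["b_%s" % g] = np.zeros(hid)
    return p

def lstm_step(p, x_t, h_prev, c_prev):
    """One update following Eqs. (5)-(10): gates, memory cell, hidden state."""
    i = sigmoid(p["W_ix"] @ x_t + p["W_ih"] @ h_prev + p["b_i"])
    f = sigmoid(p["W_fx"] @ x_t + p["W_fh"] @ h_prev + p["b_f"])
    o = sigmoid(p["W_ox"] @ x_t + p["W_oh"] @ h_prev + p["b_o"])
    g = np.tanh(p["W_cx"] @ x_t + p["W_ch"] @ h_prev + p["b_c"])
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

def next_word_distribution(p, cnn_feature, word_ids, hid=256):
    """Feed x_{-1} = W_im CNN(I) (Eq. 2), then the embedded words x_t = W_e S_t
    (Eq. 3), and return p_{t+1} = softmax(W_d h_t) (Eqs. 4 and 11)."""
    h, c = np.zeros(hid), np.zeros(hid)
    h, c = lstm_step(p, p["W_im"] @ cnn_feature, h, c)
    for w in word_ids:
        h, c = lstm_step(p, p["W_e"][:, w], h, c)
    return softmax(p["W_d"] @ h)

params = init_params(vocab_size=5000)
probs = next_word_distribution(params, np.random.randn(4096), word_ids=[1, 42, 7])
next_word = int(np.argmax(probs))   # greedy decoding: S_t = argmax_S p_t
```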
By counting n-gram co-occurrences, it rates the quality of a translated sentence given several reference sentences. To apply BLEU, we considered that generating image captions is the same as translating images into sentences. ROUGE (Lin, 2004) is an evaluation metric designed by adapting BLEU to evaluate automatic text summarization algorithms. ROUGE is based on the longest common subsequences instead of n-grams. CIDEr (Vedantam et al., 2014) is a metric developed specifically for evaluating image captions. It measures consensus in image captions by performing a term-frequency inverse document frequency (TF-IDF) weighting for each n-gram. We used a robust variant of CIDEr called CIDEr-D. For all evaluation metrics, higher scores are better. In addition to these metrics, MS COCO caption evaluation (Chen et al., 2015b) uses METEOR (Lavie, 2014), another metric for evaluating automatic machine translation. Although METEOR is a good metric, it uses an English thesaurus. It was not used in our study due to the lack of a thesaurus for the Japanese language. The CIDEr and METEOR metrics perform well in terms of correlation with human judgment (Bernardi et al., 2016). Although BLEU is unable to sufficiently discriminate between judgments, we report the BLEU figures as well since their use in literature is widespread. In the next section, we focus our analysis on CIDEr. 0 5000 10000 15000 20000 25000 No. of images (Ja) 0.40 0.45 0.50 0.55 0.60 0.65 CIDEr transfer monolingual Figure 5: CIDEr Score vs. Japanese Data Set Size 5.2 Results Table 2 shows the evaluation metrics for various settings of cross-lingual transfer learning. All values were calculated for Japanese captions gener1786 no. of images metrics En Ja BLEU-1 BLEU-2 BLEU-3 BLEU-4 ROUGE-L CIDEr-D monolingual 0 22,500 0.715 0.573 0.468 0.379 0.616 0.580 alternate 119,287 22,500 0.709 0.565 0.460 0.370 0.611 0.568 transfer 119,287 22,500 0.717 0.574 0.469 0.380 0.619 0.625 Table 2: Evaluation Metrics ated for test set images. Our proposed model is labeled “transfer.” As you can see, it outperformed the other two models for every metric. In particular, the CIDEr-D score was about 4% higher than that for the monolingual baseline. The performance of a model trained using the English and Japanese corpora alternately is shown on the line label “alternate.” Surprisingly, this model had lower performance than the baseline model. In Figure 4, we plot the learning curves represented by the CIDEr score for the Japanese captions generated for the validation set images. Transfer learning from English to Japanese converged faster than learning from the Japanese dataset or learning by training from both languages alternately. Figure 5 shows the relationship between the CIDEr score and the Japanese dataset size (number of images). The models pretrained using English captions (blue line) outperformed the ones trained using only Japanese captions for all training dataset sizes. As can be seen by comparing the case of 4,000 images with that of 20,000 images, the improvement due to cross-lingual transfer was larger when the Japanese dataset was smaller. These results show that pretraining the model with all available English captions is roughly equivalent to training the model with captions for 10,000 additional images in Japanese. This, in our case, nearly halves the cost of building the corpus. Examples of machine-generated captions along with the crowd-written ground truth captions (English translations) are shown in Figure 6. 
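For reference, the n-gram matching behind the BLEU scores reported above can be written in a few lines. This is a simplified corpus-level BLEU with uniform n-gram weights, clipped counts, and the standard brevity penalty, intended only to illustrate the metric; the numbers in Table 2 were computed with the COCO caption evaluation tool, and the toy inputs below are made up.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references, max_n=4):
    """candidates: list of token lists (one generated caption per image);
    references: list of lists of token lists (several human captions per image)."""
    cand_len = sum(len(c) for c in candidates)
    # effective reference length: per image, the reference length closest to the candidate
    ref_len = sum(min((len(r) for r in refs),
                      key=lambda l: (abs(l - len(c)), l))
                  for c, refs in zip(candidates, references))
    log_precisions = []
    for n in range(1, max_n + 1):
        matched, total = 0, 0
        for cand, refs in zip(candidates, references):
            cand_counts = ngrams(cand, n)
            max_ref = Counter()
            for ref in refs:
                max_ref = max_ref | ngrams(ref, n)   # clip counts per n-gram
            matched += sum(min(cnt, max_ref[g]) for g, cnt in cand_counts.items())
            total += max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(matched, 1e-9)) - math.log(total))
    bp = 1.0 if cand_len > ref_len else math.exp(1.0 - ref_len / max(cand_len, 1))
    return bp * math.exp(sum(log_precisions) / max_n)

hyp = ["a dog is running on the grass".split()]
refs = [["a dog runs on the grass".split(), "the dog is running outside".split()]]
print(round(corpus_bleu(hyp, refs, max_n=1), 3))   # BLEU-1 on a toy example
```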
6 Discussion Despite our initial belief, training by alternating English and Japanese input batch data for learning both languages did not work well for either language. As Japanese is a morphologically rich language and word ordering is subject-object-verb, it is one of most distant languages from English. We suspect that the alternating batch training interfered with learning the syntax of either language. Moreover, when we tried character-based models for both languages, the performance was significantly lower. This was not surprising because one word in English is roughly two characters in Japanese, and presumably differences in the language unit should affect performance. Perhaps not surprisingly, cross-lingual transfer was more effective when the resources in the target language are poor. Convergence was faster with the same amount of data in the target language when pretraining in the source language was done ahead of time. These two findings ease the burden of developing a large corpus in a resource poor language. 7 Conclusion We have created an image caption dataset for the Japanese language by collecting 131,740 captions for 26,500 images using the Yahoo! Crowdsourcing service in Japan. We showed that pretraining a neural image caption model with the English portion of the corpus improves the performance of a Japanese caption generation model subsequently trained using Japanese data. Pretraining the model using the English captions of 119,287 images was roughly equivalent to training the model using the captions of 10,000 additional images in Japanese. This, in our case, nearly halves the cost of building a corpus. Since this performance gain is obtained without modifying the original monolingual image caption generator, the proposed model can serve as a strong baseline for future research in this area. We hope that our dataset and proposed method kick start studies on cross-lingual image caption generation and that many others follow our lead. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representation (ICLR). Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 1787 Figure 6: Image Caption Generation Examples 1788 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures. arXiv preprint arXiv:1601.03896. Qiang Chen, Wenjie Li, Yu Lei, Xule Liu, and Yanxiang He. 2015a. Learning to adapt credible knowledge in cross-lingual sentiment analysis. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 419– 429, Beijing, China, July. Association for Computational Linguistics. Xinlei Chen, Tsung-Yi Lin Hao Fang, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollr, and C. Lawrence Zitnick. 2015b. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar, October. 
Association for Computational Linguistics. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference in Machine Learning (ICML). Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV’10, pages 15–29, Berlin, Heidelberg. Springer-Verlag. Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 207–213, Lisbon, Portugal, September. Association for Computational Linguistics. Ruka Funaki and Hideki Nakayama. 2015. Imagemediated learning for zero-shot cross-lingual document retrieval. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 585–590, Lisbon, Portugal, September. Association for Computational Linguistics. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1234–1244, Beijing, China, July. Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA, October. Association for Computational Linguistics. Andrej Karpathy. 2014. Neuraltalk. https:// github.com/karpathy/neuraltalk. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc. Michael Denkowski Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. ACL 2014, page 376. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick, 2014. Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, chapter Microsoft COCO: Common Objects in Context, pages 740–755. Springer International Publishing, Cham. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text summarization branches out: Proceedings of the ACL-04 workshop, 8. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using amazon’s mechanical turk. 
In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, CSLDAMT ’10, pages 139–147, Stroudsburg, PA, USA. Association for Computational Linguistics. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252. 1789 Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. Cnn features offthe-shelf: An astounding baseline for recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June. K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. T. Tieleman and G. Hinton. 2012. Lecture 6.5— RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS). Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2014. Cider: Consensus-based image description evaluation. arXiv preprint arXiv:1411.5726. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Zhi-Hua Zhou, Min-Ling Zhang, Sheng-Jun Huang, and Yu-Feng Li. 2012. Multi-instance multi-label learning. Artificial Intelligence, 176(1):2291–2320. Ayah Zirikly and Masato Hagiwara. 2015. Crosslingual transfer of named entity recognizers without parallel corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 390–396, Beijing, China, July. Association for Computational Linguistics. C.L. Zitnick, D. Parikh, and L. Vanderwende. 2013. Learning the visual interpretation of sentences. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 1681–1688, Dec. 1790
2016
168
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1791–1801, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Concept Taxonomies from Multi-modal Data Hao Zhang1, Zhiting Hu1, Yuntian Deng1, Mrinmaya Sachan1, Zhicheng Yan2, Eric P. Xing1 1Carnegie Mellon University, 2UIUC {hao,zhitingh,yuntiand,mrinmays,epxing}@cs.cmu.edu Abstract We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap. 1 Introduction Human knowledge is naturally organized as semantic hierarchies. For example, in WordNet (Miller, 1995), specific concepts are categorized and assigned to more general ones, leading to a semantic hierarchical structure (a.k.a taxonomy). A variety of NLP tasks, such as question answering (Harabagiu et al., 2003), document clustering (Hotho et al., 2002) and text generation (Biran and McKeown, 2013) can benefit from the conceptual relationship present in these hierarchies. Traditional methods of manually constructing taxonomies by experts (e.g. WordNet) and interest communities (e.g. Wikipedia) are either knowledge or time intensive, and the results have limited coverage. Therefore, automatic induction of taxonomies is drawing increasing attention in both (a) Input Seafish Shark Ray Seafish Ray Shark “seafish, such as sharks and rays…” “shark and ray are a group of seafish…” “either ray or shark lives in …” (b) Output visual similarity wordvec closeness Figure 1: An overview of our system. (a) Input: a collection of label items, represented by text and images; (b) Output: we build a taxonomy from scratch by extracting features based on distributed representations of text and images. NLP and computer vision. On one hand, a number of methods have been developed to build hierarchies based on lexical patterns in text (Yang and Callan, 2009; Snow et al., 2006; Kozareva and Hovy, 2010; Navigli et al., 2011; Fu et al., 2014; Bansal et al., 2014; Tuan et al., 2015). These works generally ignore the rich visual data which encode important perceptual semantics (Bruni et al., 2014) and have proven to be complementary to linguistic information and helpful for many tasks (Silberer and Lapata, 2014; Kiela and Bottou, 2014; Zhang et al., 2015; Chen et al., 2013). On the other hand, researchers have built visual hierarchies by utilizing only visual features (Griffin and Perona, 2008; Yan et al., 2015; Sivic et al., 2008). The resulting hierarchies are limited in interpretability and usability for knowledge transfer. Hence, we propose to combine both visual and textual knowledge to automatically build taxonomies. 
We induce is-a taxonomies by supervised learning from existing entity ontologies where each concept category (entity) is associated with images, either from existing dataset (e.g. ImageNet (Deng et al., 2009)) or retrieved from the web using search engines, as illustrated in Fig 1. Such a scenario is realistic and can be extended to a variety of tasks; for example, in knowledge base 1791 construction (Chen et al., 2013), text and image collections are readily available but label relations among categories are to be uncovered. In largescale object recognition, automatically learning relations between labels can be quite useful (Deng et al., 2014; Zhao et al., 2011). Both textual and visual information provide important cues for taxonomy induction. Fig 1 illustrates this via an example. The parent category seafish and its two child categories shark and ray are closely related as: (1) there is a hypernym-hyponym (is-a) relation between the words “seafish” and “shark”/“ray” through text descriptions like “...seafish, such as shark and ray...”, “...shark and ray are a group of seafish...”; (2) images of the close neighbors, e.g., shark and ray are usually visually similar and images of the child, e.g. shark/ray are similar to a subset of images of seafish. To effectively capture these patterns, in contrast to previous works that rely on various hand-crafted features (Chen et al., 2013; Bansal et al., 2014), we extract features by leveraging the distributed representations that embed images (Simonyan and Zisserman, 2014) and words (Mikolov et al., 2013) as compact vectors, based on which the semantic closeness is directly measured in vector space. Further, we develop a probabilistic framework that integrates the rich multi-modal features to induce “is-a” relations between categories, encouraging local semantic consistency that each category should be visually and textually close to its parent and siblings. In summary, this paper has the following contributions: (1) We propose a novel probabilistic Bayesian model (Section 3) for taxonomy induction by jointly leveraging textual and visual data. The model is discriminatively trained and can be directly applied to build a taxonomy from scratch for a collection of semantic labels. (2) We design novel features (Section 4) based on generalpurpose distributed representations of text and images to capture both textual and visual relations between labels. (3) We evaluate our model and features on the ImageNet hierarchies with two different taxonomy induction tasks (Section 5). We achieve superior performance on both tasks and improve the F1 score by 2x in the taxonomy construction task, compared to previous approaches. Extensive comparisons demonstrate the effectiveness of integrating visual features with language features for taxonomy induction. We also provide qualitative analysis on our features, the learned model, and the taxonomies induced to provide further insights (Section 5.3). 2 Related Work Many approaches have been recently developed that build hierarchies purely by identifying either lexical patterns or statistical features in text corpora (Yang and Callan, 2009; Snow et al., 2006; Kozareva and Hovy, 2010; Navigli et al., 2011; Zhu et al., 2013; Fu et al., 2014; Bansal et al., 2014; Tuan et al., 2014; Tuan et al., 2015; Kiela et al., 2015). The approaches in Yang and Callan (2009) and Snow et al. (2006) assume a starting incomplete hierarchy and try to extend it by inserting new terms. Kozareva and Hovy (2010) and Navigli et al. 
(2011) first find leaf nodes and then use lexical patterns to find intermediate terms and all the attested hypernymy links between them. In (Tuan et al., 2014), syntactic contextual similarity is exploited to construct the taxonomy, while Tuan et al. (2015) go one step further to consider trustiness and collective synonym/contrastive evidence. Different from them, our model is discriminatively trained with multi-modal data. The works of Fu et al. (2014) and Bansal et al. (2014) use similar language-based features as ours. Specifically, in (Fu et al., 2014), linguistic regularities between pretrained word vectors (Mikolov et al., 2013) are modeled as projection mappings. The trained projection matrix is then used to induce pairwise hypernym-hyponym relations between words. Our features are partially motivated by Fu et al. (2014), but we jointly leverage both textual and visual information. In Kiela et al. (2015), both textual and visual evidences are exploited to detect pairwise lexical entailments. Our work is significantly different as our model is optimized over the whole taxonomy space rather than considering only word pairs separately. In (Bansal et al., 2014), a structural learning model is developed to induce a globally optimal hierarchy. Compared with this work, we exploit much richer features from both text and images, and leverage distributed representations instead of hand-crafted features. Several approaches (Griffin and Perona, 2008; Bart et al., 2008; Marszałek and Schmid, 2008) have also been proposed to construct visual hierarchies from image collections. In (Bart et al., 2008), a nonparametric Bayesian model is developed to group images based on low-level features. 1792 In (Griffin and Perona, 2008) and (Marszałek and Schmid, 2008), a visual taxonomy is built to accelerate image categorization. In (Chen et al., 2013), only binary object-object relations are extracted using co-detection matrices. Our work differs from all of these as we integrate textual with visual information to construct taxonomies. Also of note are several works that integrate text and images as evidence for knowledge base autocompletion (Bordes et al., 2011) and zeroshot recognition (Gan et al., 2015; Gan et al., ; Socher et al., 2013). Our work is different because our task is to accurately construct multilevel hyponym-hypernym hierarchies from a set of (seen or unseen) categories. 3 Taxonomy Induction Model Our model is motivated by the key observation that in a semantically meaningful taxonomy, a category tends to be closely related to its children as well as its siblings. For instance, there exists a hypernym-hyponym relation between the name of category shark and that of its parent seafish. Besides, images of shark tend to be visually similar to those of ray, both of which are seafishes. Our model is thus designed to encourage such local semantic consistency; and by jointly considering all categories in the inference, a globally optimal structure is achieved. A key advantage of the model is that we incorporate both visual and textual features induced from distributed representations of images and text (Section 4). These features capture the rich underlying semantics and facilitate taxonomy induction. We further distinguish the relative importance of visual and textual features that could vary in different layers of a taxonomy. Intuitively, visual features would be increasingly indicative in the deeper layers, as sub-categories under the same category of specific objects tend to be visually similar. 
In contrast, textual features would be more important when inducing hierarchical relations between the categories of general concepts (i.e. in the near-root layers) where visual characteristics are not necessarily similar. 3.1 The Problem Assume a set of N categories x = {x1, x2, . . . , xN}, where each category xn consists of a text term tn as its name, as well as a set of images in = {i1, i2, . . . }. Our goal is to construct a taxonomy tree T over these categories1, such that categories of specific object types (e.g. shark) are grouped and assigned to those of general concepts (e.g. seafish). As the categories in x may be from multiple disjoint taxonomy trees, we add a pseudo category x0 as the hyper-root so that the optimal taxonomy is ensured to be a single tree. Let zn ∈{1, . . . , N} be the index of the parent of category xn, i.e. xzn is the hypernymic category of xn. Thus the problem of inducing a taxonomy structure is equivalent to inferring the conditional distribution p(z|x) over the set of (latent) indices z = {z1, . . . , zn}, based on the images and text. 3.2 Model We formulate the distribution p(z|x) through a model which leverages rich multi-modal features. Specifically, let cn be the set of child nodes of category xn in a taxonomy encoded by z. Our model is defined as pw(z, π|x, α) ∝p(π|α) N Y n=1 Y xn′ ∈cn πngw(xn, xn′, cn\xn′) (1) where gw(xn, xn′, cn\xn′), defined as gw(xn, xn′, cn\xn′) = exp{w⊤ d(xn′ )f n,n′,cn\xn′ }, measures the semantic consistency between category xn′, its parent xn as well as its siblings indexed by cn\xn′. The function gw(·) is loglinear with respect to f n,n′,cn\xn′, which is the feature vector defined over the set of relevant categories (xn, xn′, cn\xn′), with cn\xn′ being the set of child categories excluding xn′ (Section 4). The simple exponential formulation can effectively encourage close relations among nearby categories in the induced taxonomy. The function has combination weights w = {w1, . . . , wL}, where L is the maximum depth of the taxonomy, to capture the importance of different features, and the function d(xn′) to return the depth of xn′ in the current taxonomy. Each layer l (1 ≤l ≤L) of the taxonomy has a specific wl thereby allowing varying weights of the same features in different layers. The parameters are learned in a supervised manner. In eq 1, we also introduce a weight πn for each node xn, in order to capture the varying popularity of different categories (in terms of being a parent category). For example, some categories like 1We assume T to be a tree. Most existing taxonomies are modeled as trees (Bansal et al., 2014), since a tree helps simplify the construction and ensures that the learned taxonomy is interpretable. With minor modifications, our model also works on non-tree structures. 1793 plant can have a large number of sub-categories, while others such as stone have less. We model π as a multinomial distribution with Dirichlet prior α = (α1, . . . , αN) to encode any prior knowledge of the category popularity2; and the conjugacy allows us to marginalize out π analytically to get pw(z|x, α) ∝ Z p(π|α) N Y n=1 Y xn′ ∈cn πngw(xn, xn′, cn\xn′)dπ ∝ Y n Γ(qn + αn) Y xn′ ∈cn gw(xn, xn′, cn\xn′) (2) where qn is the number of children of category xn. Next, we describe our approach to infer the expectation for each zn, and based on that select a particular taxonomy structure for the category nodes x. As z is constrained to be a tree (i.e. 
cycle without loops), we include with eq 2, an indicator factor 1(z) that takes 1 if z corresponds a tree and 0 otherwise. We modify the inference algorithm appropriately to incorporate this constraint. Inference. Exact inference is computationally intractable due to the normalization constant of eq 2. We therefore use Gibbs Sampling, a procedure for approximate inference. Here we present the sampling formula for each zn directly, and defer the details to the supplementary material. The sampling procedure is highly efficient because the normalization term and the factors that are irrelevant to zn are cancelled out. The formula is p(zn =m|z\zn, ·) ∝1(zn = m, z\zn) · q−n m + αm  · Q xn′ ∈cm∪{xn} gw(xm, xn′, cm ∪{xn}) Q xn′ ∈cm\xn gw(xm, xn′, cm\xn) , (3) where qm is the number of children of category m; the superscript −n denotes the number excluding xn. Examining the validity of the taxonomy structure (i.e. the tree indicator) in each sampling step can be computationally prohibitive. To handle this, we restrict the candidate value of zn in eq 3, ensuring that the new zn is always a tree. Specifically, given a tree T, we define a structure operation as the procedure of detaching one node xn in T from its parent and appending it to another node xm which is not a descendant of xn. Proposition 1. (1) Applying a structure operation on a tree T will result in a structure that is still a tree. (2) Any tree structure over the node set x that has the same root node with tree T can be achieved by applying structure operation on T a finite number of times. 2α could be estimated using training data. The proof is straightforward and we omit it due to space limitations. We also add a pseudo node x0 as the fixed root of the taxonomy. Hence by initializing a tree-structured state rooted at x0 and restricting each updating step as a structure operation, our sampling procedure is able to explore the whole valid tree space. Output taxonomy selection. To apply the model to discover the underlying taxonomy from a given set of categories, we first obtain the marginals of z by averaging over the samples generated through eq 3, then output the optimal taxonomy z∗by finding the maximum spanning tree (MST) using the Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Bansal et al., 2014). Training. We need to learn the model parameters wl of each layer l, which capture the relative importance of different features. The model is trained using the EM algorithm. Let ℓ(xn) be the depth (layer) of category xn; and ˜z (siblings ˜cn) denote the gold structure in training data. Our training algorithm updates w through maximum likelihood estimation, wherein the gradient of wl is (see the supplementary materials for details): δwl = X n:ℓ(xn)=l {f(x˜zn, xn, ˜cn\xn)−Ep[f(xzn, xn, cn\xn)]} , which is the net difference between gold feature vectors and expected feature vectors as per the model. The expectation is approximated by collecting samples using the sampler described above and averaging them. 4 Features In this section, we describe the feature vector f used in our model, and defer more details in the supplementary material. Compared to previous taxonomy induction works which rely purely on linguistic information, we exploit both perceptual and textual features to capture the rich spectrum of semantics encoded in images and text. Moreover, we leverage the distributed representations of images and words to construct compact and effective features. 
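As an aside, before detailing these features, the structure operation that drives the Gibbs sampler of Section 3 can be made concrete with a short sketch. This is a minimal illustration under assumed data structures (the taxonomy stored as a node-to-parent map), not the authors' implementation, and the helper names are invented.

def is_descendant(z, candidate, ancestor):
    # Walk up the parent map from `candidate`; if we pass through `ancestor`,
    # then `candidate` lies in the subtree rooted at `ancestor`.
    node = candidate
    while node is not None:
        if node == ancestor:
            return True
        node = z[node]
    return False

def structure_operation(z, n, m):
    # Detach node n from its current parent and append it under node m,
    # provided m is not a descendant of n (so the result is still a tree).
    if is_descendant(z, m, n):
        raise ValueError("m is a descendant of n; the result would not be a tree")
    z_new = dict(z)
    z_new[n] = m
    return z_new

# Toy example: pseudo-root 0 with children 1 and 2, and node 3 under 1.
z = {0: None, 1: 0, 2: 0, 3: 1}
z_new = structure_operation(z, 3, 2)   # move node 3 under node 2

Restricting every sampling move to such an operation is what guarantees, by Proposition 1, that the sampler explores exactly the space of valid trees rooted at x0.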
Specifically, each image i is represented as an embedding vector vi ∈Ra extracted by deep convolutional neural networks. Such image representation has been successfully applied in various vision tasks. On the other hand, the category name t is represented by its word embedding vt ∈Rb, a low-dimensional dense vector induced by the Skip-gram model (Mikolov et 1794 al., 2013) which is widely used in diverse NLP applications too. Then we design f(xn, xn′, cn\xn′) based on the above image and text representations. The feature vector f is used to measure the local semantic consistency between category xn′ and its parent category xn as well as its siblings cn\xn′. 4.1 Image Features Sibling similarity. As mentioned above, close neighbors in a taxonomy tend to be visually similar, indicating that the embedding of images of sibling categories should be close to each other in the vector space Ra. For a category xn and its image set in, we fit a Gaussian distribution N(vin, Σn) to the image vectors, where vin ∈Ra is the mean vector and Σn ∈Ra×a is the covariance matrix. For a sibling category xm of xn, we define the visual similarity between xn and xm as vissim(xn, xm)=[N(vim; vin, Σn)+N(vin; vim, Σm)]/2 which is the average probability of the mean image vector of one category under the Gaussian distribution of the other. This takes into account not only the distance between the mean images, but also the closeness of the images of each category. Accordingly, we compute the visual similarity between xn′ and the set cn\xn′ by averaging: vissim(xn′, cn\xn′) = P xm∈cn\xn′ vissim(xn′, xm) |cn| −1 . We then bin the values of vissim(xn′, cn\xn′) and represent it as an one-hot vector, which constitutes f as a component named as siblings imageimage relation feature (denoted as S-V13). Parent prediction. Similar to feature S-V1, we also create the similarity feature between the image vectors of the parent and child, to measure their visual similarity. However, the parent node is usually a more general concept than the child, and it usually consists of images that are not necessarily similar to its child. Intuitively, by narrowing the set of images to those that are most similar to its child improves the feature. Therefore, different from S-V1, when estimating the Gaussian distribution of the parent node, we only use the top K images with highest probabilities under the Gaussian distribution of the child node. We empirically show in section 5.3 that choosing an appropriate K consistently boosts the performance. We name this feature as parent-child image-image relation feature (denoted as PC-V1). 3S: sibling, PC: parent-child, V: visual, T: textual. Further, inspired by the linguistic regularities of word embedding, i.e. the hypernym-hyponym relationship between words can be approximated by a linear projection operator between word vectors (Mikolov et al., 2013; Fu et al., 2014), we design a similar strategy to (Fu et al., 2014) between images and words so that the parent can be “predicted” given the image embedding of its child category and the projection matrix. Specifically, let (xn, xn′) be a parent-child pair in the training data, we learn a projection matrix Φ which minimizes the distance between Φvin′ (i.e. the projected mean image vector vin′ of the child) and vtn (i.e. the word embedding of the parent): Φ∗= argmin Φ 1 N X n ∥Φvin′ −vtn∥2 2 + λ∥Φ∥1, where N is the number of parent-child pairs in the training data. 
Once the projection matrix has been learned, the similarity between a child node xn′ and its parent xn is computed as ∥Φvin′ −vtn∥, and we also create an one-hot vector by binning the feature value. We call this feature as parentchild image-word relation feature (PC-V2). 4.2 Word Features We briefly introduce the text features employed. More details about the text feature extraction could be found in the supplementary material. Word embedding features.d PC-V1, We induce features using word vectors to measure both sibling-sibling and parent-child closeness in text domain (Fu et al., 2014). One exception is that, as each category has only one word, the sibling similarity is computed as the cosine distance between two word vectors (instead of mean vectors). This will produce another two parts of features, parentchild word-word relation feature (PC-T1) and siblings word-word relation feature (S-T1). Word surface features. In addition to the embedding-based features, we further leverage lexical features based on the surface forms of child/parent category names. Specifically, we employ the Capitalization, Ends with, Contains, Suffix match, LCS and Length different features, which are commonly used in previous works in taxonomy induction (Yang and Callan, 2009; Bansal et al., 2014). 5 Experiments We first disclose our implementation details in section 5.1 and the supplementary material for bet1795 ter reproducibility. We then compare our model with previous state-of-the-art methods (Fu et al., 2014; Bansal et al., 2014) with two taxonomy induction tasks. Finally, we provide analysis on the weights and taxonomies induced. 5.1 Implementation Details Dataset. We conduct our experiments on the ImageNet2011 dataset (Deng et al., 2009), which provides a large collection of category items (synsets), with associated images and a label hierarchy (sampled from WordNet) over them. The original ImageNet taxonomy is preprocessed, resulting in a tree structure with 28231 nodes. Word embedding training. We train word embedding for synsets by replacing each word/phrase in a synset with a unique token and then using Google’s word2vec tool (Mikolov et al., 2013). We combine three public available corpora together, including the latest Wikipedia dump (Wikipedia, 2014), the One Billion Word Language Modeling Benchmark (Chelba et al., 2013) and the UMBC webbase corpus (Han et al., 2013), resulting in a corpus with total 6 billion tokens. The dimension of the embedding is set to 200. Image processing. we employ the ILSVRC12 pre-trained convolutional neural networks (Simonyan and Zisserman, 2014) to embed each image into the vector space. Then, for each category xn with images, we estimate a multivariate Gaussian parameterized by Nxn = (µxn, Σxn), and constrain Σxn to be diagonal to prevent overfitting. For categories with very few images, we only estimate a mean vector µxn. For nodes that do not have images, we ignore the visual feature. Training configuration. The feature vector is a concatenation of 6 parts, as detailed in section 4. All pairwise distances are precomputed and stored in memory to accelerate Gibbs sampling. The initial learning rate for gradient descent in the M step is set to 0.1, and is decreased by a fraction of 10 every 100 EM iterations. 5.2 Evaluation 5.2.1 Experimental Settings We evaluate our model on three subtrees sampled from the ImageNet taxonomy. To collect the subtrees, we start from a given root (e.g. 
consumer goods) and traverse the full taxonomy using BFS, and collect all descendant nodes within a depth h (number of nodes in the longest path). We vary h Trees Tree A Tree B Tree C Synset ID 12638 19919 23733 Name consumer goods animal food, nutrient h = 4 187 207 572 h = 5 362 415 890 h = 6 493 800 1166 h = 7 524 1386 1326 Table 1: Statistics of our evaluation set. The bottom 4 rows give the number of nodes within each height h ∈{4, 5, 6, 7}. The scale of the threes range from small to large, and there is no overlapping among them. to get a series of subtrees with increasing heights h ∈{4, 5, 6, 7} and various scales (maximally 1326 nodes) in different domains. The statistics of the evaluation sets are provided in Table 1. To avoid ambiguity, all nodes used in ILSVRC 2012 are removed as the CNN feature extractor is trained on them. We design two different tasks to evaluate our model. (1) In the hierarchy completion task, we randomly remove some nodes from a tree and use the remaining hierarchy for training. In the test phase, we infer the parent of each removed node and compare it with groundtruth. This task is designed to figure out whether our model can successfully induce hierarchical relations after learning from within-domain parent-child pairs. (2) Different from the previous one, the hierarchy construction task is designed to test the generalization ability of our model, i.e. whether our model can learn statistical patterns from one hierarchy and transfer the knowledge to build a taxonomy for another collection of out-of-domain labels. Specifically, we select two trees as the training set to learn w. In the test phase, the model is required to build the full taxonomy from scratch for the third tree. We use Ancestor F1 as our evaluation metric (Kozareva and Hovy, 2010; Navigli et al., 2011; Bansal et al., 2014). Specifically, we measure F1 = 2PR/(P +R) values of predicted “is-a” relations where the precision (P) and recall (R) are: P = |isapredicted ∩isagold| |isapredicted| , R = |isapredicted ∩isagold| |isagold| . We compare our method to two previously state-of-the-art models by Fu et al. (2014) and Bansal et al. (2014), which are closest to ours. 1796 Method h = 4 h = 5 h = 6 h = 7 Hierarchy Completion Fu2014 0.66 0.42 0.26 0.21 Ours (L) 0.70 0.49 0.45 0.37 Ours (LV) 0.73 0.51 0.50 0.42 Hierarchy Construction Fu2014 0.53 0.33 0.28 0.18 Bansal2014 0.67 0.53 0.43 0.37 Ours (L) 0.58 0.41 0.36 0.30 Ours (LB) 0.68 0.55 0.45 0.40 Ours (LV) 0.66 0.52 0.42 0.34 Ours (LVB - E) 0.68 0.55 0.44 0.39 Ours (LVB) 0.70 0.57 0.49 0.43 Table 2: Comparisons among different variants of our model, Fu et al. (2014) and Bansal et al. (2014) on two tasks. The ancestor-F1 scores are reported. 5.2.2 Results Hierarchy completion. In the hierarchy completion task, we split each tree into 70% nodes for training and 30% for test, and experiment with different h. We compare the following three systems: (1) Fu20144 (Fu et al., 2014); (2) Ours (L): Our model with only language features enabled (i.e. surface features, parent-child word-word relation feature and siblings word-word relation feature); (3) Ours (LV): Our model with both language features and visual features 5. The average performance on three trees are reported at Table 2. We observe that the performance gradually drops when h increases, as more nodes are inserted when the tree grows higher, leading to a more complex and difficult taxonomy to be accurately constructed. 
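For reference, the ancestor-F1 metric reported in these tables can be computed with a few lines of code. The sketch below is illustrative only (it is not the evaluation script used here); it assumes each taxonomy has already been expanded into its full set of ancestor-descendant ("is-a") pairs.

def ancestor_f1(isa_predicted, isa_gold):
    # Precision, recall and F1 over predicted vs. gold "is-a" pairs.
    correct = len(isa_predicted & isa_gold)
    if correct == 0:
        return 0.0, 0.0, 0.0
    p = correct / len(isa_predicted)
    r = correct / len(isa_gold)
    return p, r, 2 * p * r / (p + r)

# Toy example with the seafish fragment of Figure 1.
gold = {("seafish", "shark"), ("seafish", "ray")}
pred = {("seafish", "shark"), ("shark", "ray")}
print(ancestor_f1(pred, gold))   # (0.5, 0.5, 0.5)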
Overall, our model outperforms Fu2014 in terms of the F1 score, even without visual features. In the most difficult case with h = 7, our model still holds an F1 score of 0.42 (2× of Fu2014), demonstrating the superiority of our model. Hierarchy construction. The hierarchy construction task is much more difficult than hierarchy completion task because we need to build a taxonomy from scratch given only a hyper-root. For this task, we use a leave-one-out strategy, i.e. we train our model on every two trees and test on the third, and report the average performance in Table 2. We compare the following methods: (1) Fu2014, (2) Ours (L), and (3) Ours (LV), as described above; (4) Bansal2014: The model by Bansal et al. (2014) 4We tried different parameter settings for the number of clusters C and the identification threshold δ, and reported the best performance we achieved. 5In the comparisons to (Fu et al., 2014) and (Bansal et al., 2014), we simply set K = ∞, i.e. we use all available images of the parent category to estimate the PC-V1 feature. retrained using our dataset; (5) Ours (LB): By excluding visual features, but including other language features from Bansal et al. (2014); (6) Ours (LVB): Our full model further enhanced with all semantic features from Bansal et al. (2014); (7) Ours (LVB - E): By excluding word embeddingbased language features from Ours (LVB). As shown, on the hierarchy construction task, our model with only language features still outperforms Fu2014 with a large gap (0.30 compared to 0.18 when h = 7), which uses similar embeddingbased features. The potential reasons are two-fold. First, we take into account not only parent-child relations but also siblings. Second, their method is designed to induce only pairwise relations. To build the full taxonomy, they first identify all possible pairwise relations using a simple thresholding strategy and then eliminate conflicted relations to obtain a legitimate tree hierarchy. In contrast, our model is optimized over the full space of all legitimate taxonomies by taking the structure operation in account during Gibbs sampling. When comparing to Bansal2014, our model with only word embedding-based features underperforms theirs. However, when introducing visual features, our performance is comparable (pvalue = 0.058).Furthermore, if we discard visual features but add semantic features from Bansal et al. (2014), we achieve a slight improvement of 0.02 over Bansal2014 (p-value = 0.016), which is largely attributed to the incorporation of word embedding-based features that encode high-level linguistic regularity. Finally, if we enhance our full model with all semantic features from Bansal et al. (2014), our model outperforms theirs by a gap of 0.04 (p-value < 0.01), which justifies our intuition that perceptual semantics underneath visual contents are quite helpful. 5.3 Qualitative Analysis In this section, we conduct qualitative studies to investigate how and when the visual information helps the taxonomy induction task. Contributions of visual features. To evaluate the contribution of each part of the visual features to the final performance, we train our model jointly with textual features and different combinations of visual features, and report the ancestorF1 scores. As shown in Table 3. 
When incorporating the feature S-V1, the performance is substantially boosted by a large gap at all heights, show1797 S-V1 PC-V1 PC-V2 h = 4 h = 5 h = 6 h = 7 0.58 0.41 0.36 0.30 ✓ 0.63 0.48 0.40 0.32 ✓ 0.61 0.44 0.38 0.31 ✓ 0.60 0.42 0.37 0.31 ✓ ✓ 0.65 0.52 0.41 0.33 ✓ ✓ ✓ 0.66 0.52 0.42 0.34 Table 3: The performance when different combinations of visual features are enabled. ing that visual similarity between sibling nodes is a strong evidence for taxonomy induction. It is intuitively plausible, as it is highly likely that two specific categories share a common (and more general) parent category if similar visual contents are observed between them. Further, adding the PC-V1 feature gains us a better improvement than adding PC-V2, but both minor than S-V1. Compared to that of siblings, the visual similarity between parents and children does not strongly holds all the time. For example, images of Terrestrial animal are only partially similar to those of Feline, because the former one contains the later one as a subset. Our feature captures this type of “contain” relation between parents and children by considering only the top-K images from the parent category that have highest probabilities under the Gaussian distribution of the child category. To see this, we vary K while keep all other settings, and plot the F1 scores in Fig 2. We observe a trend that when we gradually increase K, the performance goes up until reaching some maximal; It then slightly drops (or oscillates) even when more images are available, which confirms with our feature design that only top images should be considered in parent-child visual similarity. Overall, the three visual features complement each other, and achieve the highest performance when combined. Visual representations. To investigate how the image representations affect the final performance, we compare the ancestor-F1 score when different pre-trained CNNs are used for visual feature extraction. Specifically, we employ both the CNN-128 model (128 dimensional feature with 15.6% top-5 error on ILSVRC12) and the VGG16 model (4096 dimensional feature with 7.5% top-5 error) by Simonyan and Zisserman (2014), but only observe a slight improvement of 0.01 on the ancestor-F1 score for the later one. Relevance of textual and visual features v.s. depth of tree. Compared to Bansal et al. (2014), h = 4 h = 5 h = 6 h = 7 Ancester-F1 K /100 Figure 2: The Ancestor-F1 scores changes over K (number of images used in the PC-V1 feature) at different heights. The values in the x-axis are K/100; K = ∞means all images are used. Figure 3: Normalized weights of each feature v.s. the layer depth. a major difference of our model is that different layers of the taxonomy correspond to different weights wl, while in (Bansal et al., 2014) all layers share the same weights. Intuitively, introducing layer-wise w not only extends the model capacity, but also differentiates the importance of each feature at different layers. For example, the images of two specific categories, such as shark and ray, are very likely to be visually similar. However, when the taxonomy goes from bottom to up (specific to general), the visual similarity is gradually undermined — images of fish and terrestrial animal are not necessarily similar any more. Hence, it is necessary to privatize the weights w for different layers to capture such variations, i.e. 
the visual features become more and more evident from shallow to deep layers, while the textual counterparts, which capture more abstract concepts, relatively grow more indicative oppositely from specific to general. To visualize the variations across layers, for each feature component, we fetch its correspond1798 millipede invertebrate critter animal caterpillar domestic animal starfish chordate arrowworm arthropod nematode trichina planarian polyp echinoderm annelid worm tussock caterpillar tent caterpillar cephalochordate scavenger larvacean sagitta stocker lancelet archiannelid larva foodstuff meal, repast food, nutrient nutriment liquid diet dietary ingredient flour grain beef stew cows’ milk juice water boiled egg barley spring water stew fish stew diary product wheat soybean meal wheat flour hard-boiled egg brunch breakfast water drinking water juice Figure 4: Excerpts of the prediction taxonomies, compared to the groundturth. Edges marked as red and green are false predictions and unpredicted groundtruth links, respectively. ing block in w as V . Then, we average |V | and observe how its values change with the layer depth h. For example, for the parent-child word-word relation feature, we first fetch its corresponding weights V from w as a 20 × 6 matrix, where 20 is the feature dimension and 6 is the number of layers. We then average its absolute values6 in column and get a vector v with length 6. After ℓ2 normalization, the magnitude of each entry in v directly reflects the relative importance of the feature as an evidence for taxonomy induction. Fig 3(b) plots how their magnitudes change with h for every feature component averaged on three train/test splits. It is noticeable that for both wordword relations (S-T1, PC-T1), their corresponding weights slightly decrease as h increases. On the contrary, the image-image relation features (S-V1, PC-V1) grows relatively more prominent. The results verify our conjecture that when the category hierarchy goes deeper into more specific classes, the visual similarity becomes relatively more indicative as an evidence for taxonomy induction. Visualizing results. Finally, we visualize some excerpts of our predicted taxonomies, as compared to the groundtruth in Fig 4. 6We take the absolute value because we only care about the relevance of the feature as an evidence for taxonomy induction, but note that the weight can either encourage (positive) or discourage (negative) connections of two nodes. 6 Conclusion In this paper, we study the problem of automatically inducing semantically meaningful concept taxonomies from multi-modal data. We propose a probabilistic Bayesian model which leverages distributed representations for images and words. We compare our model and features to previous ones on two different tasks using the ImageNet hierarchies, and demonstrate superior performance of our model, and the effectiveness of exploiting visual contents for taxonomy induction. We further conduct qualitative studies and distinguish the relative importance of visual and textual features in constructing various parts of a taxonomy. Acknowledgements We would like to thank anonymous reviewers for their valuable feedback. We would also like to thank Mohit Bansal for helpful suggestions. We thank NVIDIA for GPU donations. The work is supported by NSF Big Data IIS1447676. References Mohit Bansal, David Burkett, Gerard de Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. 
Evgeniy Bart, Ian Porteous, Pietro Perona, and Max 1799 Welling. 2008. Unsupervised learning of visual taxonomies. In CVPR. Or Biran and Kathleen McKeown. 2013. Classifying taxonomic relations between pairs of wikipedia articles. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Conference on Artificial Intelligence, number EPFL-CONF-192344. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Xinlei Chen, Abhinav Shrivastava, and Abhinav Gupta. 2013. Neil: Extracting visual knowledge from web data. In CVPR. Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On shortest arborescence of a directed graph. Scientia Sinica. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR. Jia Deng, Nan Ding, Yangqing Jia, Andrea Frome, Kevin Murphy, Samy Bengio, Yuan Li, Hartmut Neven, and Hartwig Adam. 2014. Large-scale object classification using label relation graphs. In ECCV. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In ACL. Chuang Gan, Yi Yang, Linchao Zhu, Deli Zhao, and Yueting Zhuang. Recognizing an action using its name: A knowledge-based approach. International Journal of Computer Vision, pages 1–17. Chuang Gan, Ming Lin, Yi Yang, Yueting Zhuang, and Alexander G Hauptmann. 2015. Exploring semantic inter-class relationships (SIR) for zero-shot action recognition. In AAAI. Gregory Griffin and Pietro Perona. 2008. Learning and using taxonomies for fast visual categorization. In CVPR. Lushan Han, Abhay Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. Umbc ebiquitycore: Semantic textual similarity systems. Atlanta, Georgia, USA. Sanda M Harabagiu, Steven J Maiorano, and Marius A Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering. Andreas Hotho, Alexander Maedche, and Steffen Staab. 2002. Ontology-based text document clustering. Douwe Kiela and L´eon Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In EMNLP. Douwe Kiela, Laura Rimell, Ivan Vulic, and Stephen Clark. 2015. Exploiting image generality for lexical entailment detection. In ACL. Zornitsa Kozareva and Eduard Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In EMNLP. Marcin Marszałek and Cordelia Schmid. 2008. Constructing category hierarchies for visual recognition. In ECCV. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM. Roberto Navigli, Paola Velardi, and Stefano Faralli. 2011. A graph-based algorithm for inducing lexical taxonomies from scratch. In IJCAI. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In ACL. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Josef Sivic, Bryan C Russell, Andrew Zisserman, William T Freeman, and Alexei A Efros. 
2008. Unsupervised discovery of visual object class hierarchies. In CVPR. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In ACL. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pages 935–943. Luu Anh Tuan, Jung-jae Kim, and Ng See Kiong. 2014. Taxonomy construction using syntactic contextual evidence. In EMNLP. Luu Anh Tuan, Jung-jae Kim, and Ng See Kiong. 2015. Incorporating trustiness and collective synonym/contrastive evidence into taxonomy construction. Wikipedia. 2014. https://dumps.wikimedia. org/enwiki/20141208/. 1800 Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, and Yizhou Yu. 2015. Hd-cnn: Hierarchical deep convolutional neural networks for large scale visual recognition. In ICCV. Hui Yang and Jamie Callan. 2009. A metric-based framework for automatic taxonomy induction. In ACL-IJCNLP. Hao Zhang, Gunhee Kim, and Eric P. Xing. 2015. Dynamic topic modeling for monitoring market competition from online text and image data. In KDD. Bin Zhao, Fei Li, and Eric P Xing. 2011. Large-scale category structure aware image categorization. In NIPS. Xingwei Zhu, Zhao-Yan Ming, Xiaoyan Zhu, and TatSeng Chua. 2013. Topic hierarchy construction for the organization of multi-source user generated contents. In SIGIR. 1801
2016
169
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 172–182, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Greedy Constituent Parsing with Dynamic Oracles Maximin Coavoux1,2 and Benoˆıt Crabb´e1,2,3 1Univ. Paris Diderot, Sorbonne Paris Cit´e 2Alpage, Inria 3Institut Universitaire de France [email protected] [email protected] Abstract Dynamic oracle training has shown substantial improvements for dependency parsing in various settings, but has not been explored for constituent parsing. The present article introduces a dynamic oracle for transition-based constituent parsing. Experiments on the 9 languages of the SPMRL dataset show that a neural greedy parser with morphological features, trained with a dynamic oracle, leads to accuracies comparable with the best non-reranking and non-ensemble parsers. 1 Introduction Constituent parsing often relies on search methods such as dynamic programming or beam search, because the search space of all possible predictions is prohibitively large. In this article, we present a greedy parsing model. Our main contribution is the design of a dynamic oracle for transitionbased constituent parsing. In NLP, dynamic oracles were first proposed to improve greedy dependency parsing training without involving additional computational costs at test time (Goldberg and Nivre, 2012; Goldberg and Nivre, 2013). The training of a transition-based parser involves an oracle, that is a function mapping a configuration to the best transition. Transition-based parsers usually rely on a static oracle, only welldefined for gold configurations, which transforms trees into sequences of gold actions. Training against a static oracle restricts the exploration of the search space to the gold sequence of actions. At test time, due to error propagation, the parser will be in a very different situation than at training time. It will have to infer good actions from noisy configurations. To alleviate error propagation, a solution is to train the parser to predict the best action given any configuration, by allowing it to explore a greater part of the search space at train time. Dynamic oracles are non-deterministic oracles well-defined for any configuration. They give the best possible transitions for any configuration. Although dynamic oracles are widely used in dependency parsing and available for most standard transition systems (Goldberg and Nivre, 2013; Goldberg et al., 2014; G´omez-Rodr´ıguez et al., 2014; Straka et al., 2015), no dynamic oracle parsing model has yet been proposed for phrase structure grammars. The model we present aims at parsing morphologically rich languages (MRL). Recent research has shown that morphological features are very important for MRL parsing (Bj¨orkelund et al., 2013; Crabb´e, 2015). However, traditional linear models (such as the structured perceptron) need to define rather complex feature templates to capture interactions between features. Additional morphological features complicate this task (Crabb´e, 2015). Instead, we propose to rely on a neural network weighting function which uses a non-linear hidden layer to automatically capture interactions between variables, and embeds morphological features in a vector space, as is usual for words and other symbols (Collobert and Weston, 2008; Chen and Manning, 2014). The article is structured as follows. In Section 2, we present neural transition-based parsing. 
Section 3 motivates learning with a dynamic oracle and presents an algorithm to do so. Section 4 introduces the dynamic oracle. Finally, we present parsing experiments in Section 5 to evaluate our proposal. 2 Transition-Based Constituent Parsing Transition-based parsers for phrase structure grammars generally derive from the work of Sagae 172 A[h] E[e] D[d] X[h] C[c] B[b] A[h] E[e] A:[h] D[d] A:[h] A:[h] X[h] C[c] B[b] Figure 1: Order-0 head markovization. and Lavie (2005). In the present paper, we extend Crabb´e (2015)’s transition system. Grammar form We extract the grammar from a head-annotated preprocessed constituent treebank (cf Section 5). The preprocessing involves two steps. First, unary chains are merged, except at the preterminal level, where at most one unary production is allowed. Second, an order-0 head-markovization is performed (Figure 1). This step introduces temporary symbols in the binarized grammar, which are suffixed by “:”. The resulting productions have one the following form: X[h] →A[a] B[b] X[h] →A[a] b X[h] →h X[h] →a B[b] where X, A, B are delexicalised non-terminals, a, b and h ∈{a, b} are tokens, and X[h] is a lexicalized non-terminal. The purpose of lexicalization is to allow the extraction of features involving the heads of phrases together with their tags and morphological attributes. Transition System In the transition-based framework, parsing relies on two data structures: a buffer containing the sequence of tokens to parse and a stack containing partial instantiated trees. A configuration C = ⟨j, S, b, γ⟩is a tuple where j is the index of the next token in the buffer, S is the current stack, b is a boolean, and γ is the set of constituents constructed so far.1 Constituents are instantiated non-terminals, i.e. tuples (X, i, j) such that X is a non-terminal and (i, j) are two integers denoting its span. Although the content of γ could be retrieved from the stack, we make it explicit because it will be useful for the design of the oracle in Section 4. From an initial configuration C0 = ⟨0, ϵ, ⊥, ∅⟩, the parser incrementally derives new configurations by performing actions until a final configuration is reached. S(HIFT) pops an element from the 1The introduction of γ is the main difference with Crabb´e (2015)’s transition system. Stack: S|(C, l, i)|(B, i, k)|(A, k, j) Action Constraints RL(X) or RR(X), X∈N A/∈Ntmp and B /∈Ntmp RL(X:) or RR(X:), X:∈Ntmp C/∈Ntmp or j < n RR(X) B/∈Ntmp RL(X) A/∈Ntmp Table 1: Constraints to ensure that binary trees can be unbinarized. n is the sentence length. Input w0w1 . . . wn−1 Axiom ⟨0, ϵ, ⊥, ∅⟩ S ⟨j, S, ⊥, γ⟩ ⟨j + 1, S|(tj, j, j + 1), ⊤, γ⟩ RL(X) ⟨j, S|(A, i, k)|(B, k, j), ⊥, γ⟩ ⟨j, S|(X, i, j), ⊥, γ ∪{(X, i, j)}⟩ RU(X) ⟨j, S|(tj−1, j −1, j), ⊤, γ⟩ ⟨j, S|(X, j −1, j), ⊥, γ ∪{(X, j −1, j)}⟩ GR ⟨j, S, ⊤, γ⟩ ⟨j, S, ⊥, γ⟩ Figure 2: Transition system, the transition RR(X) and the lexicalization of symbols are omitted. buffer and pushes it on the stack. R(EDUCE)(X) pops two elements from the stack, and pushes a new non-terminal X on the stack with the two elements as its children. There are two kinds of binary reductions, left (RL) or right (RR), depending on the position of the head. Finally, unary reductions (RU(X)) pops only one element from the stack and pushes a new non-terminal X. A derivation C0⇒τ = C0 a0 ⇒. . . aτ−1 ⇒ Cτ is a sequence of configurations linked by actions and leading to a final configuration. Figure 2 presents the algorithm as a deductive system. 
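To make the deductive system of Figure 2 more concrete, the transitions can be sketched on plain tuples as below. This is a simplified illustration only: lexicalization, the head-direction distinction between RL and RR, and the well-formedness constraints of Table 1 are all left out.

# A configuration is (j, stack, b, gamma): j is the buffer index, the stack
# holds (label, start, end) items, b flags whether a unary reduction is still
# allowed, and gamma is the set of constituents built so far.

def shift(config, tags):
    j, stack, b, gamma = config
    return (j + 1, stack + [(tags[j], j, j + 1)], True, gamma)

def reduce_binary(config, label):             # covers both RL(X) and RR(X)
    j, stack, b, gamma = config
    (_, i, _), (_, _, end) = stack[-2], stack[-1]
    new = (label, i, end)
    return (j, stack[:-2] + [new], False, gamma | {new})

def reduce_unary(config, label):              # RU(X), only right after a SHIFT
    j, stack, b, gamma = config
    (_, i, end) = stack[-1]
    new = (label, i, end)
    return (j, stack[:-1] + [new], False, gamma | {new})

def ghost_reduce(config):                     # GR: skip the unary reduction
    j, stack, b, gamma = config
    return (j, stack, False, gamma)

# "the cat sleeps": shift, shift, reduce NP, shift, reduce-unary VP, reduce S
c = (0, [], False, frozenset())
tags = ["D", "N", "V"]
c = shift(c, tags); c = ghost_reduce(c)
c = shift(c, tags); c = ghost_reduce(c)
c = reduce_binary(c, "NP")
c = shift(c, tags); c = reduce_unary(c, "VP")
c = reduce_binary(c, "S")
# c now holds the constituents {("NP", 0, 2), ("VP", 2, 3), ("S", 0, 3)}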
G(HOST)R(EDUCE) actions and boolean b (⊤or ⊥) are used to ensure that unary reductions (RU) can only take place once after a SHIFT action.2 Constraints on the transitions make sure that predicted trees can be unbinarized. Figure 3 shows two examples of trees that could not have been obtained by the binarization process. In the first tree, a temporary symbol rewrites as two tempo2This transition system is similar to the extended system of Zhu et al. (2013). The main difference is the strategy used to deal with unary reductions. Our strategy ensures that derivations for a sentence all have the same number of steps, which can have an effect when using beam search. We use a GHOST-REDUCE action, whereas they use a padding strategy with an IDLE action. 173 A[h] C:[c] A:[h] A[h] C[h] A:[a] Figure 3: Examples of ill-formed binary trees rary symbols. In the second one, the head of a temporary symbol is not the head of its direct parent. Table 1 shows a summary of the constraints used to ensure that any predicted tree is a wellformed binarized tree.3 In this table, N is the set of non-terminals and Ntmp ⊂N is the set of temporary non-terminals. Weighted Parsing The deductive system is inherently non-deterministic. Determinism is provided by a scoring function s(C0⇒τ) = τ X i=1 fθ(Ci−1, ai) where θ is a set of parameters. The score of a derivation decomposes as a sum of scores of actions. In practice, we used a feed-forward neural network very similar to the scoring model of Chen and Manning (2014). The input of the network is a sequence of typed symbols. We consider three main types (non-terminals, tags and terminals) plus a language-dependent set of morphological attribute types, for example, gender, number, or case (Crabb´e, 2015). The first layer h(0) is a lookup layer which concatenates the embeddings of each typed symbol extracted from a configuration. The second layer h(1) is a non-linear layer with a rectifier activation (ReLU). Finally, the last layer h(2) is a softmax layer giving a distribution over possible actions, given a configuration. The score of an action is its log probability. Assuming v1, v2 . . . , vα are the embeddings of the sequence of symbols extracted from a configuration, the forward pass is summed up by the following equations: h(0) = [v1; v2; . . . ; vα] h(1) = max{0, W(h) · h(0) + b(h)} h(2) = Softmax(W(o) · h(1) + b(o)) fθ(C, a) = log(h(2) a ) 3There are additional constraints which are not presented here. For example, SHIFT assumes that the buffer is not empty. A full description of constraints typically used in a slightly different transition system can be found in Zhang and Clark (2009)’s appendix section. s2.ct[s2.wt] s1.ct[s1.wt] s1.cl[s1.wl] s1.cr[s1.wr] s0.ct[s0.wt] s0.cl[s0.wl] s0.cr[s0.wr] q1 . . . q4 | {z } stack | {z } queue Figure 4: Schematic representation of local elements in a configuration. Thus, θ includes the weights and biases for each layer (W(h), W(o), b(h), b(o)), and the embedding lookup table for each symbol type. We perform greedy search to infer the bestscoring derivation. Note that this is not an exact inference. Most propositions in phrase structure parsing rely on dynamic programming (Durrett and Klein, 2015; Mi and Huang, 2015) or beam search (Crabb´e, 2015; Watanabe and Sumita, 2015; Zhu et al., 2013). However we found that with a scoring function expressive enough and a rich feature set, greedy decoding can be surprisingly accurate (see Section 5). 
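For concreteness, the forward pass above can be written out as a small NumPy sketch. The dimensions and variable names are arbitrary, the parameters are assumed to be given, and the restriction of the argmax to admissible actions is omitted; this is an illustration, not the implementation used in the experiments.

import numpy as np

def score_actions(symbol_ids, embeddings, Wh, bh, Wo, bo):
    # symbol_ids: indices of the typed symbols extracted from a configuration
    # embeddings: one lookup table per symbol type
    # h(0): concatenation of the embeddings of the extracted symbols
    h0 = np.concatenate([emb[i] for emb, i in zip(embeddings, symbol_ids)])
    # h(1): rectified linear hidden layer
    h1 = np.maximum(0.0, Wh @ h0 + bh)
    # h(2): softmax output layer over parser actions
    logits = Wo @ h1 + bo
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.log(probs)                         # f_theta(C, a) = log h(2)_a

# Greedy decoding then simply takes the argmax over the allowed actions at
# each configuration.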
Features Each terminal is a tuple containing the word form, its part-of-speech tag and an arbitrary number of language-specific morphological attributes, such as CASE, GENDER, NUMBER, ASPECT and others (Seddah et al., 2013; Crabb´e, 2015). The representation of a configuration depends on symbols at the top of the two data structures, including the first tokens in the buffer, the first lexicalised non-terminals in the stack and possibly their immediate descendants (Figure 4). The full set of templates is specified in Table 6 of Annex A. The sequence of symbols that forms the input of the network is the instanciation of each position described in this table with a discrete symbol. 3 Training a Greedy Parser with an Oracle An important component for the training of a parser is an oracle, that is a function mapping a gold tree and a configuration to an action. The oracle is used to generate local training examples from trees, and feed them to the local classifier. A static oracle (Goldberg and Nivre, 2012) is an incomplete and deterministic oracle. It is only well-defined for gold configurations (the configurations derived by the gold action sequence) and returns the unique gold action. Usually, parsers use a static oracle to transform the set of binarized trees into a set D = {C(i), a(i)}1≤i≤T of training examples. Training consists in minimiz174 ing the negative log likelihood of these examples. The limitation of this training method is that only gold configurations are seen during training. At test time, due to error propagation, the parser will have to predict good actions from noisy configurations, and will have much difficulty to recover after mistakes. To alleviate this problem, a line of work (Daum´e III et al., 2006; Ross et al., 2011) has cast the problem of structured prediction as a search problem and developed training algorithms aiming at exploring a greater part of the search space. These methods require an oracle well-defined for every search state, that is, for every parsing configuration. A dynamic oracle is a complete and nondeterministic oracle (Goldberg and Nivre, 2012). It returns the non-empty set of the best transitions given a configuration and a gold tree. In dependency parsing, starting from Goldberg and Nivre (2012), dynamic oracle algorithms and training methods have been proposed for a variety of transition systems and led to substantial improvements in accuracy (Goldberg and Nivre, 2013; Goldberg et al., 2014; G´omez-Rodr´ıguez et al., 2014; Straka et al., 2015; G´omez-Rodr´ıguez and Fern´andezGonz´alez, 2015). Online training An online trainer iterates several times over each sentence in the treebank, and updates its parameters until convergence. When a static oracle is used, the training examples can be pregenerated from the sentences. When we use a dynamic oracle instead, we generate training examples on the fly, by following the prediction of the parser (given the current parameters) instead of the gold action, with probability p, where p is a hyperparameter which controls the degree of exploration. The online training algorithm for a single sentence s, with an oracle function o is shown in Figure 5. It is a slightly modified version of Goldberg and Nivre (2013)’s algorithm 3, an approach they called learning with exploration. In particular, as our neural network uses a crossentropy loss, and not the perceptron loss used in Goldberg and Nivre (2013), updates are performed even when the prediction is correct. 
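The procedure of Figure 5 can be rendered in Python as follows. This is a sketch: initial, oracle, score, update and apply stand in for the corresponding components of the parser and are not defined here.

import random

def train_one_sentence(sentence, theta, p, oracle):
    # p controls the degree of exploration (p = 0 reduces to static-oracle training).
    config = initial(sentence)
    while not config.is_final():
        best_actions = oracle(config, sentence)        # set of zero-cost actions
        scores = score(theta, config)                  # maps each action to f_theta(C, a)
        predicted = max(scores, key=scores.get)        # parser's own choice
        if predicted in best_actions:
            target = predicted
        else:
            target = max(best_actions, key=scores.get) # best-scoring oracle action
        theta = update(theta, config, target)          # cross-entropy update
        if random.random() < p:
            config = apply(predicted, config)          # explore: follow the prediction
        else:
            config = apply(target, config)             # follow an optimal action
    return theta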
When p = 0, the algorithm acts identically to a static oracle trainer, as the parser always follows the gold transition. When the set of actions predicted by the oracle has more than one element, the best scoring element among them is chosen as the reference function TRAINONESENTENCE(s, θ, p, o) C ←INITIAL(s) while C is not a final configuration do A ←o(C, s) ▷set of best actions ˆa ←argmaxa fθ(C)a if ˆa ∈A then t ←ˆa ▷t: target else t ←argmaxa∈A fθ(C)a θ ←UPDATE(θ, C, t) ▷backprop if RANDOM() < p then C ←ˆa(C) ▷Follow prediction else C ←t(C) ▷Follow best action return θ Figure 5: Online training for a single annotated sentence s, using an oracle function o. action to update the parameters of the neural network. 4 A Dynamic Oracle for Transition-Based Parsing This section introduces a dynamic oracle algorithm for the parsing model presented in the previous 2 sections, that is the function o used in the algorithm in Figure 5. The dynamic oracle must minimize a cost function L(c; t, T) computing the cost of applying transition t in configuration c, with respect to a gold parse T. As is shown by Goldberg and Nivre (2013), the oracle’s correctness depends on the cost function. A correct dynamic oracle o will have the following general formulation: o(c, T) = {t|L(c; t, T) = min t′ L(c; t′, T)} (1) The correctness of the oracle is not necessary to improve training. The oracle needs only to be good enough (Daum´e et al., 2009), which is confirmed by empirical results (Straka et al., 2015). Goldberg and Nivre (2013) identified arc-decomposability, a powerful property of certain dependency parsing transition systems for which we can easily derive correct efficient oracles. When this property holds, we can infer whether a tree is reachable from the reachability of individual arcs. This simplifies the calculation of each transition cost. We rely on an analogue property we call constituent decomposition. A 175 set of constituents is tree-consistent if it is a subset of a set corresponding to a well-formed tree. A phrase structure transition system is constituentdecomposable iff for any configuration C and any tree-consistent set of constituents γ, if every constituent in γ is reachable from C, then the whole set is reachable from C (constituent reachability will be formally defined in Section 4.1). The following subsections are structured as follows. First of all, we present a cost function (Section 4.1). Then, we derive a correct dynamic oracle algorithm for an ideal case where we assume that there is no temporary symbols in the grammar (Section 4.2). Finally, we present some heuristics to define a dynamic oracle for the general case (Section 4.3). 4.1 Cost Function The cost function we use ignores the lexicalization of the symbols. For the sake of simplicity, we momentarily leave apart the headedness of the binary reductions (until the last paragraph of Section 4) and assume a unique binary REDUCE action. For the purpose of defining a cost function for transitions, we adopt a representation of trees as sets of constituents. For example, (S (NP (D the) (N cat)) (VP (V sleeps))) corresponds to the set {(S, 0, 3), (NP, 0, 2), (VP, 2, 3)}. As is shown in Figure 2, every reduction action (unary or binary) adds a new constituent to the set γ of already predicted constituents, which was introduced in Section 2. 
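To illustrate this representation, the constituent set of the example tree can be read off with a small recursive sketch (a hypothetical nested-tuple encoding of the tree is assumed; preterminals are skipped, as in the example above).

def constituents(tree, start=0):
    # Return (span_end, set_of_constituents) for a nested (label, children) tree.
    label, children = tree
    if isinstance(children, str):            # preterminal: spans one token
        return start + 1, set()
    i, spans = start, set()
    for child in children:
        i, child_spans = constituents(child, i)
        spans |= child_spans
    spans.add((label, start, i))
    return i, spans

tree = ("S", [("NP", [("D", "the"), ("N", "cat")]),
              ("VP", [("V", "sleeps")])])
print(constituents(tree)[1])
# -> {('S', 0, 3), ('NP', 0, 2), ('VP', 2, 3)}  (set order may vary)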
We define the cost of a predicted set of constituents ˆγ with respect to a gold set γ∗as the number of constituents in γ∗which are not in ˆγ penalized by the number of predicted unary constituents which are not in the gold set: Lr(ˆγ, γ∗) = |γ∗−ˆγ| + |{(X, i, i + 1) ∈ˆγ|(X, i, i + 1) /∈γ∗}| (2) The first term penalizes false negatives and the second one penalizes unary false positives. The number of binary constituents in γ∗and ˆγ depends only on the sentence length n, thus binary false positives are implicitly taken into account by the fist term. The cost of a transition and that of a configuration are based on constituent reachability. The relation C ⊢C′ holds iff C′ can be deduced from C by performing a transition. Let ⊢∗denote the reflexive transitive closure of ⊢. A set of constituents γ (possibly a singleton) is reachable from a configuration C iff there is a configuration C′ = ⟨j, S, b, γ′⟩such that C ⊢∗C′ and γ ⊆γ′, which we write C ; γ. Then, the cost of an action t for a configuration C is the cost difference between the best tree reachable from t(C) and the best tree reachable from C: Lr(t; C, γ∗) = min γ:t(C);γ L(γ, γ∗)−min γ:C;γ L(γ, γ∗) This cost function is easily decomposable (as a sum of costs of transitions) whereas F1 measure is not. By definition, for each configuration, there is at least one transition with cost 0 with respect to the gold parse. Otherwise, it would entail that there is a tree reachable from C but unreachable from t(C), for any t. Therefore, we reformulate equation 1: o(C, γ∗) = {t|Lr(C; t, γ∗) = 0} (3) In the transition system, the grammar is left implicit: any reduction is allowed (even if the corresponding grammar rule has never been seen in the training corpus). However, due to the introduction of temporary symbols during binarization, there are constraints to ensure that any derivation corresponds to a well-formed unbinarized tree. These constraints make it difficult to test the reachability of constituents. For this reason, we instantiate two transition systems. We call SR-TMP the transition system in Figure 2 which enforces the constraints in Table 1, and SR-BIN, the same transition system without any of such constraints. SR-BIN assumes an idealized case where the grammar contains no temporary symbols, whereas SR-TMP is the actual system we use in our experiments. 4.2 A Correct Oracle for SR-BIN Transition System SR-BIN transition system provides no guarantees that predicted trees are unbinarisable. The only condition for a binary reduction to be allowed is that the stack contains at least two symbols. If so, any non-terminal in the grammar could be used. In such a case, we can define a simple necessary and sufficient condition for constituent reachability. Constituent reachability Let γ∗be a treeconsistent constituent set, and C = ⟨j, S, b, γ⟩a 176 parsing configuration, such that: S = (X1, i0, i1) . . . (Xp, ip−1, i)|(A, i, k)|(B, k, j) A binary constituent (X, m, n) is reachable iff it satisfies one of the three following properties : 1. (X, m, n) ∈γ 2. j < m < n 3. m ∈{i0, . . . ip−1, i, k}, n ≥j and (m, n) ̸= (k, j) The first two cases are trivial and correspond respectively to a constituent already constructed and to a constituent spanning words which are still in the buffer. In the third case, (X, m, n) can be constructed by performing n −j times the transitions SHIFT and GHOST-REDUCE (or REDUCE-UNARY), and then a sequence of binary reductions ended by an X reduction. 
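The cost of equation (2) and the reachability conditions just listed translate directly into code; the sketch below uses a minimal named tuple for configurations, with stack items as (label, start, end) triples, and transcribes the three conditions exactly as stated.

from collections import namedtuple

# <j, S, b, gamma>: j words consumed, the stack S, the boolean b, and the set
# gamma of constituents built so far
Config = namedtuple("Config", "j stack b gamma")

def cost_lr(predicted, gold):
    # Equation (2): false negatives plus unary false positives.
    false_negatives = len(gold - predicted)
    unary_false_positives = sum(1 for (X, i, k) in predicted
                                if k == i + 1 and (X, i, k) not in gold)
    return false_negatives + unary_false_positives

def binary_reachable(constituent, config):
    # Reachability of a binary constituent (X, m, n) from config under SR-BIN,
    # following the three conditions above.
    X, m, n = constituent
    if (X, m, n) in config.gamma:                        # 1. already built
        return True
    if config.j < m < n:                                 # 2. entirely in the buffer
        return True
    starts = {start for (_, start, _) in config.stack}   # {i0, ..., i, k}
    k, j = config.stack[-1][1], config.j                 # top symbol spans (k, j)
    return m in starts and n >= j and (m, n) != (k, j)   # 3.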
Note that as the index j in the configuration is non-decreasing during a derivation, the constituents whose span end is inferior to j are not reachable if they are not already constructed. For a unary constituent, the condition for reachability is straightforward: a constituent (X, i −1, i) is reachable from configuration C = ⟨j, S, b, γ⟩iff (X, i −1, i) ∈γ or i > j or i = j ∧b = ⊤. Constituent decomposability SR-BIN is constituent decomposable. In this paragraph, we give some intuition about why this holds. Reasoning by contradiction, let’s assume that every constituent of a tree-consistent set γ∗is reachable from C = ⟨j, S|(A, i, k)|(B, k, j), b, γ⟩and that γ∗is not reachable (contraposition). This entails that at some point during a derivation, there is no possible transition which maintains reachability for all constituents of γ∗. Let’s assume C is in such a case. If some constituent of γ∗is reachable from C, but not from SHIFT(C), its span must have the form (m, j), where m ≤i. If some constituent of γ∗is reachable from C, but not from REDUCE(X)(C), for any label X, its span must have the form (k, n), where n > j. If both conditions hold, γ∗contains incompatible constituents (crossing brackets), which contradicts the assumption that γ∗is tree-consistent. Computing the cost of a transition The conditions on constituent reachability makes it easy to compute the cost of a transition t for a given configuration C = ⟨j, S|(A, i, k)|(B, k, j), b, γ⟩and a gold set γ∗: 1: function O(⟨j, S|(A, i, k)|(B, k, j), b, γ⟩, γ∗) 2: if b = ⊤then ▷Last action was SHIFT 3: if (X, j −1, j) ∈γ∗then 4: return {REDUCEUNARY(X)} 5: else 6: return {GHOSTREDUCE} 7: if ∃n > j, (X, k, n) ∈γ∗then 8: return {SHIFT} 9: if (X, i, j) ∈γ∗then 10: return {REDUCE(X)} 11: if ∃m < i, (X, m, j) ∈γ∗then 12: return {REDUCE(Y), ∀Y } 13: return {a ∈A|a is a possible action} Figure 6: Oracle algorithm for SR-BIN. • The cost of a SHIFT is the number of constituents not in γ, reachable from C and whose span ends in j. • The cost of a binary reduction REDUCE(X) is a sum of two terms. The first one is the number of constituents of γ∗whose span has the form (k, n) with n > j. These are no longer compatible with (X, i, j) in a tree. The second one is one if (Y, i, j) ∈γ∗and Y ̸= X and zero otherwise. It is the cost of mislabelling a constituent with a gold span. • The cost of a unary reduction or that of a ghost reduction can be computed straightforwardly by looking at the gold set of constituents. We present in Figure 6 an oracle algorithm derived from these observations. 4.3 A Heuristic-based Dynamic Oracle for SR-TMP transition system The conditions for constituent reachability for SRBIN do not hold any longer for SR-TMP. In particular, constituent reachability depends crucially on the distinction between temporary and nontemporary symbols. The algorithm in Figure 6 is not correct for this transition system. In Figure 7, we give an illustration of a prototypical case in which the algorithm in Figure 6 will fail. The constituent (C:, i, j) is in the gold set of constituents and could be constructed with REDUCE(C:). The third symbol on the stack being temporary symbol D:, the reduction to a temporary symbol will jeopardize the reachability of (C, m, j) because reduc177 tions are not possible when the two symbols at the top of the stack are temporary symbols. The best course of action is then a reduction to any non-temporary symbol, so as to keep (C, m, j) reachable. 
Note that in this case, the cost of REDUCE(C:) cannot be smaller than that of a single mislabelled constituent. In fact, this example shows that the constraints inherent to SR-TMP makes it non constituentdecomposable. In the example in Figure 7, both constituents in the set {(C, m, j), (C:, i, j)}, a tree-consistent constituent set, is reachable. However, the whole set is not reachable, as REDUCE(C:) would make (C, m, j) not reachable. In dependency parsing, several exact dynamic oracles have been proposed for non arcdecomposable transition systems (Goldberg et al., 2014), including systems for non-projective parsing (G´omez-Rodr´ıguez et al., 2014). These oracles rely on tabular methods to compute the cost of transitions and have (high-degree) polynomial worst case running time. Instead, to avoid resorting to more computationally expensive exact methods, we adapt the algorithm in Figure 6 to the constraints involving temporary symbols using the following heuristics: • If the standard oracle predicts a reduction, make sure to choose its label so that every reachable constituent (X, m, j) ∈γ∗(m < i) is still reachable after the transition. Practically, if such constituent exists and if the third symbol on the stack is a temporary symbol, then do not predict a temporary symbol. • When reductions to both temporary symbols and non-temporary symbols have cost zero, only predict temporary symbols. This should not harm training and improve precision for the unbinarized tree, as any non temporary Configuration stack Gold tree D:m,i Ai,k Bk,j Cm,j Dm,i C:i,j Ai,k Bk,j Figure 7: Problematic case. Due to the temporary symbol constraints enforced by SR-TMP, the algorithm in Figure 6 will fail on this example. Dev F1 (EVALB) Decoding toks/sec static (this work) 88.6 greedy dynamic (this work) 89.0 greedy Test F1 (EVALB) Hall et al. (2014) 89.2 CKY 12 Berkeley (Petrov et al., 2006) 90.1 CKY 169 Durrett and Klein (2015)† 91.1 CKY Zhu et al. (2013)† 91.3 beam=16 1,290 Crabb´e (2015) 90.0 beam=8 2,150 Sagae and Lavie (2006) 85.1 greedy static (this work) 88.0 greedy 3,820⋄ dynamic (this work) 88.6 greedy 3,950⋄ Table 3: Results on the Penn Treebank (Marcus et al., 1993). † use clusters or word vectors learned on unannotated data. ⋄different architecture (2.3Ghz Intel), single processor. symbol in the binarized tree corresponds to a constituent in the n-ary tree. Head choice In some cases, namely when reducing two non-temporary symbols to a new constituent (X, i, j), the oracle must determine the head position in the reduction (REDUCE-RIGHT or REDUCE-LEFT). We used the following heuristic: if (X, i, j) is in the gold set, choose the same head position, otherwise, predict both RR(X) and RL(X) to keep the non-determinism. 5 Experiments We conducted parsing experiments to evaluate our proposal. We compare two experimental settings. In the ‘static’ setting, the parser is trained only on gold configurations; in the ‘dynamic’ setting, we use the dynamic oracle and the training method in Figure 5 to explore non-gold configurations. We used both the SPMRL dataset (Seddah et al., 2013) in the ‘predicted tag’ scenario, and the Penn Treebank (Marcus et al., 1993), to compare our proposal to existing systems. The tags and morphological attributes were predicted using Marmot (Mueller et al., 2013), by 10-fold jackknifing for the train and development sets. 
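The 10-fold jackknifing works as sketched below; train_tagger stands in for training Marmot on a subset of the treebank and returning a tagging function, so the names are placeholders rather than the actual preprocessing scripts.

def jackknife_tags(train_sentences, train_tagger, n_folds=10):
    # Each fold is tagged by a model trained on the other nine folds, so the
    # training treebank carries test-like tagging noise.
    folds = [train_sentences[i::n_folds] for i in range(n_folds)]
    tagged = []
    for held_out_index, held_out in enumerate(folds):
        rest = [s for i, fold in enumerate(folds)
                if i != held_out_index for s in fold]
        tagger = train_tagger(rest)
        tagged.extend(tagger(sentence) for sentence in held_out)
    return tagged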
For the SPMRL dataset, the head annotation was carried out with the procedures described in Crabb´e Number of possible values ≤8 ≤32 > 32 Dimensions for embedding 4 8 16 Table 4: Size of morphological attributes embeddings. 178 Arabic Basque French German Hebrew Hungarian Korean Polish Swedish Avg Decoding Development F1 (EVALBSPMRL) Durrett and Klein (2015)† CKY 80.68 84.37 80.65 85.25 89.37 89.46 82.35 92.10 77.93 84.68 Crabb´e (2015) beam=8 81.25 84.01 80.87 84.08 90.69 88.27 83.09 92.78 77.87 84.77 static (this work) greedy 80.25 84.29 79.87 83.99 89.78 88.44 84.98 92.38 76.63 84.51 dynamic (this work) greedy 80.94 85.17 80.31 84.61 90.20 88.70 85.46 92.57 77.87 85.09 Test F1 (EVALBSPMRL) Bj¨orkelund et al. (2014)† 81.32∗ 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 86.12 Berkeley (Petrov et al., 2006) CKY 79.19 70.50 80.38 78.30 86.96 81.62 71.42 79.23 79.18 78.53 Berkeley-Tags CKY 78.66 74.74 79.76 78.28 85.42 85.22 78.56 86.75 80.64 80.89 Durrett and Klein (2015)† CKY 80.24 85.41 81.25 80.95 88.61 90.66 82.23 92.97 83.45 85.09 Crabb´e (2015) beam=8 81.31 84.94 80.84 79.26 89.65 90.14 82.65 92.66 83.24 84.97 Fern´andez-Gonz´alez and Martins (2015) 85.90 78.75 78.66 88.97 88.16 79.28 91.20 82.80 (84.22) static (this work) greedy 79.77 85.91 79.62 79.20 88.64 90.54 84.53 92.69 81.45 84.71 dynamic (this work) greedy 80.71 86.24 79.91 80.15 88.69 90.51 85.10 92.96 81.74 85.11 dynamic (this work) beam=2 81.14 86.45 80.32 80.68 89.06 90.74 85.17 93.15 82.65 85.48 dynamic (this work) beam=4 81.59 86.45 80.48 80.69 89.18 90.73 85.31 93.13 82.77 85.59 dynamic (this work) beam=8 81.80 86.48 80.56 80.74 89.24 90.76 85.33 93.13 82.80 85.64 Table 2: Results on development and test corpora. Metrics are provided by evalb spmrl with spmrl.prm parameters (http://www.spmrl.org/spmrl2013-sharedtask.html). † use clusters or word vectors learned on unannotated data. ∗Bj¨orkelund et al. (2013). (2015), using the alignment between dependency treebanks and constituent treebanks. For English, we used Collins’ head annotation rules (Collins, 2003). Our system is entirely supervised and uses no external data. Every embedding was initialised randomly (uniformly) in the interval [−0.01, 0.01]. Word embeddings have 32 dimensions, tags and non-terminal embeddings have 16 dimensions. The dimensions of the morphological attributes depend on the number of values they can have (Table 4). The hidden layer has 512 units.4 For the ‘dynamic’ setting, we trained every other k sentence with the dynamic oracle and the other sentences with the static oracle. This method, used by Straka et al. (2015), allows for high values of p, without slowing or preventing convergence. We used several hyperparameters combinations (see Table 5 of Annex A). For each language, we present the model with the combination which maximizes the developement set Fscore. We used Averaged Stochastic Gradient Descent (Polyak and Juditsky, 1992) to minimize the negative log likelihood of the training examples. We shuffled the sentences in the training set before each iteration. Results Results for English are shown in Table 3. The use of the dynamic oracle improves F-score 4We did not tune these hyperparameters for each language. Instead, we chose a set of hyperparameters which achieved a tradeoff between training time and model accuracy. The effect of the morphological features and their dimensionality are left to future work. by 0.4 on the development set and 0.6 on the test set. 
The resulting parser, despite using greedy decoding and no additional data, is quite accurate. For example, it compares well with Hall et al. (2014)’s span based model and is much faster. For the SPMRL dataset, we report results on the development sets and test sets in Table 2. The metrics take punctuation and unparsed sentences into account (Seddah et al., 2013). We compare our results with the SPMRL shared task baselines (Seddah et al., 2013) and several other parsing models. The model of Bj¨orkelund et al. (2014) obtained the best results on this dataset. It is based on a product grammar and a discriminative reranker, together with morphological features and word clusters learned on unannotated data. Durrett and Klein (2015) use a neural CRF based on CKY decoding algorithm, with word embeddings pretrained on unannotated data. Fern´andez-Gonz´alez and Martins (2015) use a parsing-as-reduction approach, based on a dependency parser with a label set rich enough to reconstruct constituent trees from dependency trees. Finally, Crabb´e (2015) uses a structured perceptron with rich features and beam-search decoding. Both Crabb´e (2015) and Bj¨orkelund et al. (2014) use MARMOT-predicted morphological tags (Mueller et al., 2013), as is done in our experiments. Our results show that, despite using a very simple greedy inference and being strictly supervised, our base model (static oracle training) is competitive with the best single parsers on this dataset. 179 We hypothesize that these surprising results come both from the neural scoring model and the morphological attribute embeddings (especially for Basque, Hebrew, Polish and Swedish). We did not test these hypotheses systematically and leave this investigation for future work. Furthermore, we observe that the dynamic oracle improves training by up to 0.6 F-score (averaged over all languages). The improvement depends on the language. For example, Swedish, Arabic, Basque and German are the languages with the most important improvement. In terms of absolute score, the parser also achieves very good results on Korean and Basque, and even outperforms Bj¨orkelund et al. (2014)’s reranker on Korean. Combined effect of beam and dynamic oracle Although initially, dynamic oracle training was designed to improve parsing without relying on more complex search methods (Goldberg and Nivre, 2012), we tested the combined effects of dynamic oracle training and beam search decoding. In Table 2, we provide results for beam decoding with the already trained local models in the ‘dynamic’ setting. The transition from greedy search to a beam of size two brings an improvement comparable to that of the dynamic oracle. Further increase in beam size does not seem to have any noticeable effect, except for Arabic. These results show that effects of the dynamic oracle and beam decoding are complementary and suggest that a good tradeoff between speed and accuracy is already achieved in a greedy setting or with a very small beam size 6 Conclusion We have described a dynamic oracle for constituent parsing. Experiments show that training a parser against this oracle leads to an improvement in accuracy over a static oracle. Together with morphological features, we obtain a greedy parser as accurate as state-of-the-art (non reranking) parsers for morphologically-rich languages. Acknowledgments We thank the anonymous reviewers, along with H´ector Mart´ınez Alonso and Olga Seminck for valuable suggestions to improve prior versions of this article. 
References Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135– 145, Seattle, Washington, USA, October. Association for Computational Linguistics. Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, Agnieszka Fale´nska, Rich´ard Farkas, Thomas Mueller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. Introducing the ims-wrocław-szeged-cis entry at the spmrl 2014 shared task: Reranking and morpho-syntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 97–102, Dublin, Ireland, August. Dublin City University. L´eon Bottou. 2010. Large-Scale Machine Learning with Stochastic Gradient Descent. In Yves Lechevallier and Gilbert Saporta, editors, Proceedings of COMPSTAT’2010, pages 177–186. PhysicaVerlag HD. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Empirical Methods in Natural Language Processing (EMNLP). Michael Collins. 2003. Head-driven statistical models for natural language parsing. Comput. Linguist., 29(4):589–637, December. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Benoit Crabb´e. 2015. Multilingual discriminative lexicalized phrase structure parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1847–1856, Lisbon, Portugal, September. Association for Computational Linguistics. Hal Daum´e, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325. Hal Daum´e III, John Langford, and Daniel Marcu. 2006. Searn in practice. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the Association for Computational Linguistics, Beijing, China, July. Association for Computational Linguistics. Daniel Fern´andez-Gonz´alez and Andr´e F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 180 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523–1533, Beijing, China, July. Association for Computational Linguistics. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959–976, Mumbai, India, December. The COLING 2012 Organizing Committee. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403–414. Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic oracles in transition-based parsing. TACL, 2:119–130. Carlos G´omez-Rodr´ıguez and Daniel Fern´andezGonz´alez. 2015. An efficient dynamic oracle for unrestricted non-projective parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 256–261, Beijing, China, July. Association for Computational Linguistics. 
Carlos G´omez-Rodr´ıguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dynamic oracle for non-projective dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917–927, Doha, Qatar, October. Association for Computational Linguistics. David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, Maryland, June. Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. Haitao Mi and Liang Huang. 2015. Shift-reduce constituency parsing with dynamic programming and pos tag lattice. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1030–1035, Denver, Colorado, May–June. Association for Computational Linguistics. Thomas Mueller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322–332, Seattle, Washington, USA, October. Association for Computational Linguistics. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia, July. Association for Computational Linguistics. B. T. Polyak and A. B. Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, July. St´ephane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Geoffrey J. Gordon, David B. Dunson, and Miroslav Dud´ık, editors, AISTATS, volume 15 of JMLR Proceedings, pages 627–635. JMLR.org. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125–132. Association for Computational Linguistics. Kenji Sagae and Alon Lavie. 2006. A best-first probabilistic shift-reduce parser. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 691–698. Association for Computational Linguistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146– 182, Seattle, Washington, USA, October. Association for Computational Linguistics. Milan Straka, Jan Hajiˇc, Jana Strakov´a, and Jan Hajiˇc jr. 2015. Parsing universal dependency treebanks using neural networks and search-based oracle. 
In Proceedings of Fourteenth International Workshop on Treebanks and Linguistic Theories (TLT 14), December. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing 181 (Volume 1: Long Papers), pages 1169–1179, Beijing, China, July. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2009. Transitionbased parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies, IWPT ’09, pages 162–171, Stroudsburg, PA, USA. Association for Computational Linguistics. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In ACL (1), pages 434– 443. The Association for Computer Linguistics. A Supplementary Material ‘static’ and ‘dynamic’ setting ‘dynamic’ setting learning rate α iterations k p {0.01, 0.02} {0, 10−6} [1, 24] {8, 16} {0.5, 0.9} Table 5: Hyperparameters. α is the decrease constant used for the learning rate (Bottou, 2010). s0.ct s0.wt.tag s0.wt.form q1.tag s0.cl s0.wl.tag s0.wl.form q2.tag s0.cr s0.wr.tag s0.wr.form q3.tag s1.ct s1.wt.tag s1.wt.form q4.tag s1.cl s1.wl.tag s1.wl.form q1.form s1.cr s1.wr.tag s1.wr.form q2.form s2.ct s2.wt.tag s2.wt.form q3.form q4.form s0.wt.m∀m ∈M q0.m∀m ∈M s1.wt.m∀m ∈M q1.m∀m ∈M Table 6: These templates specify a list of addresses in a configuration. The input of the neural network is the instanciation of each address by a discrete typed symbol. Each vi (Section 2) is the embedding of the ith instantiated symbol of this list. M is the set of all available morphological attributes for a given language. We use the following notations (cf Figure 4): si is the ith item in the stack, c denotes non-terminals, top, left and right, indicate the position of an element in the subtree. Finally, w and q are respectively stack and buffer tokens. 182
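As a sketch of how the optimisation hyperparameters of Table 5 fit together, one Bottou-style implementation of averaged SGD with a decrease constant is shown below; the exact schedule used in the experiments is not specified beyond the citation, so the formula here is an assumption.

def asgd_step(theta, avg_theta, grad, t, lr0=0.01, alpha=1e-6):
    # Decreasing step size controlled by the decrease constant alpha
    # (Table 5: learning rate in {0.01, 0.02}, alpha in {0, 1e-6}); this
    # schedule is one common Bottou-style choice, not taken from the paper.
    lr = lr0 / (1.0 + alpha * t)
    theta = [w - lr * g for w, g in zip(theta, grad)]
    # Polyak averaging: the averaged parameters are the ones used at test time.
    avg_theta = [a + (w - a) / (t + 1) for a, w in zip(avg_theta, theta)]
    return theta, avg_theta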
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1802–1813, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Generating Natural Questions About an Image Nasrin Mostafazadeh1, Ishan Misra2, Jacob Devlin3, Margaret Mitchell3 Xiaodong He3, Lucy Vanderwende3 1 University of Rochester, 2 Carnegie Mellon University, 3 Microsoft Research [email protected], [email protected] Abstract There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images. These tasks have focused on literal descriptions of the image. To move beyond the literal, we choose to explore how questions about an image are often directed at commonsense inference and the abstract events evoked by objects in the image. In this paper, we introduce the novel task of Visual Question Generation (VQG), where the system is tasked with asking a natural and engaging question when shown an image. We provide three datasets which cover a variety of images from object-centric to event-centric, with considerably more abstract training data than provided to state-of-the-art captioning systems thus far. We train and test several generative and retrieval models to tackle the task of VQG. Evaluation results show that while such models ask reasonable questions for a variety of images, there is still a wide gap with human performance which motivates further work on connecting images with commonsense knowledge and pragmatics. Our proposed task offers a new challenge to the community which we hope furthers interest in exploring deeper connections between vision & language. 1 Introduction We are witnessing a renewed interest in interdisciplinary AI research in vision & language, from descriptions of the visual input such as image captioning (Chen et al., 2015; Fang et al., 2014; Donahue et al., 2014; Chen et al., 2015) and video Natural Questions: - Was anyone injured in the crash? - Is the motorcyclist alive? - What caused this accident? Generated Caption: - A man standing next to a motorcycle. Figure 1: Example image along with its natural questions and automatically generated caption. transcription (Rohrbach et al., 2012; Venugopalan et al., 2015), to testing computer understanding of an image through question answering (Antol et al., 2015; Malinowski and Fritz, 2014). The most established work in the vision & language community is ‘image captioning’, where the task is to produce a literal description of the image. It has been shown (Devlin et al., 2015; Fang et al., 2014; Donahue et al., 2014) that a reasonable language modeling paired with deep visual features trained on large enough datasets promise a good performance on image captioning, making it a less challenging task from language learning perspective. Furthermore, although this task has a great value for communities of people who are low-sighted or cannot see in all or some environments, for others, the description does not add anything to what a person has already perceived. The popularity of the image sharing applications in social media and user engagement around images is evidence that commenting on pictures is a very natural task. A person might respond to an image with a short comment such as ‘cool’, ‘nice pic’ or ask a question. Imagine someone has shared the image in Figure 1. What is the very first question that comes to mind? 
Your question is most probably very similar to the questions listed next to the image, expressing concern about the motorcyclist (who is not even present in the image). As you can tell, natural questions are not 1802 about what is seen, the policemen or the motorcycle, but rather about what is inferred given these objects, e.g., an accident or injury. As such, questions are often about abstract concepts, i.e., events or states, in contrast to the concrete terms1 used in image captioning. It is clear that the corresponding automatically generated caption2 for Figure 1 presents only a literal description of objects. To move beyond the literal description of image content, we introduce the novel task of Visual Question Generation (VQG), where given an image, the system should ‘ask a natural and engaging question’. Asking a question that can be answered simply by looking at the image would be of interest to the Computer Vision community, but such questions are neither natural nor engaging for a person to answer and so are not of interest for the task of VQG. Learning to ask questions is an important task in NLP and is more than a syntactic transformation of a declarative sentence (Vanderwende, 2008). Deciding what to ask about demonstrates understanding and as such, question generation provides an indication of machine understanding, just as some educational methods assess students’ understanding by their ability to ask relevant questions3. Furthermore, training a system to ask a good question (not only answer a question) may imbue the system with what appears to be a cognitive ability unique to humans among other primates (Jordania, 2006). Developing the ability to ask relevant and to-the-point questions can be an essential component of any dynamic learner which seeks information. Such an ability can be an integral component of any conversational agent, either to engage the user in starting a conversation or to elicit taskspecific information. The contributions of this paper can be summarized as follows: (1) in order to enable the VQG research, we carefully created three datasets with a total of 75,000 questions, which range from object- to event-centric images, where we show that VQG covers a wide range of abstract terms including events and states (Section 3). (2) we collected 25,000 gold captions for our eventcentric dataset and show that this dataset presents 1Concrete terms are the ones that can be experienced with five senses. Abstract terms refer to intangible things, such as feelings, concepts, and qualities 2Throughout this paper we use the state-of-the-art captioning system (Fang et al., 2014), henceforth MSR captioning system https://www.captionbot.ai/, to generate captions. 3http://rightquestion.org/ challenges to the state-of-the-art image captioning models (Section 3.3). (3) we perform analysis of various generative and retrieval approaches and conclude that end-to-end deep neural models outperform other approaches on our most-challenging dataset (Section 4). (4) we provide a systematic evaluation methodology for this task, where we show that the automatic metric ∆BLEU strongly correlates with human judgments (Section 5.3). The results show that while our models learn to generate promising questions, there is still a large gap to match human performance, making the generation of relevant and natural questions an interesting and promising new challenge to the community. 2 Related Work For the task of image captioning, datasets have primarily focused on objects, e.g. 
Pascal VOC (Everingham et al., 2010) and Microsoft Common Objects in Context (MS COCO) (Lin et al., 2014). MS COCO, for example, includes complex everyday scenes with 91 basic objects in 328k images, each paired with 5 captions. Event detection is the focus in video processing and action detection, but these do not include a textual description of the event (Yao et al., 2011b; Andriluka et al., 2014; Chao et al., 2015; Xiong et al., 2015). The number of actions in each of these datasets is still relatively small, ranging from 40 (Yao et al., 2011a) to 600 (Chao et al., 2015) and all involve humanoriented activity (e.g. ‘cooking’, ‘gardening’, ‘riding a bike’). In our work, we are focused on generating questions for static images of events, such as ‘fire’, ‘explosion’ or ‘snowing’, which have not yet been investigated in any of the above datasets. Visual Question Answering is a relatively new task where the system provides an answer to a question about the image content. The most notable, Visual Question Answering (VQA) (Antol et al., 2015), is an open-ended (free-form) dataset, in which both the questions and the answers are crowd-sourced, with workers prompted to ask a visually verifiable question which will ‘stump a smart robot’. Gao et al. (2015) used similar methodology to create a visual question answering dataset in Chinese. COCO-QA (CQA) (Ren et al., 2015), in contrast, does not use human-authored questions, but generates questions automatically from image captions of the MS COCO dataset by applying a set of transformation rules to generate the wh-question. The expected answers in CQA 1803 Figure 2: Example right and wrong questions for the task of VQG. are by design limited to objects, numbers, colors, or locations. A more in- depth analysis of VQA and CQA datasets will be presented in Section 3.1. In this work, we focus on questions which are interesting for a person to answer, not questions designed to evaluate computer vision. A recently published work on VQA, Visual7W (Zhu et al., 2016), establishes a grounding link on the object regions corresponding to the textual answer. This setup enables a system to answer a question with visual answers (in addition to textual answers). They collect a set of 327,939 7W multiple-choice QA pairs, where they point out that ‘where’, ‘when’ and ‘why’ questions often require high-level commonsense reasoning, going beyond spatial reasoning required for ‘which’ or ‘who’ questions. This is more in line with the type of questions that VQG captures, however, the majority of the questions in Visual7w are designed to be answerable by only the image, making them unnatural for asking a human. Thus, learning to generate the questions in VQA task is not a useful sub-task, as the intersection between VQG and any VQA questions is by definition minimal. Previous work on question generation from textual input has focused on two aspects: the grammaticality (Wolfe, 1976; Mitkov and Ha, 2003; Heilman and Smith, 2010) and the content focus of question generation, i.e., “what to ask about”. For the latter, several methods have been explored: (Becker et al., 2012) create fill-in-the-blank questions, (Mazidi and Nielsen, 2014) and (Lindberg et al., 2013) use manually constructed question templates, while (Labutov et al., 2015) use crowdsourcing to collect a set of templates and then rank the potentially relevant templates for the selected content. 
To our knowledge, neither a retrieval model nor a deep representation of textual input, presented in our work, have yet been used to generate questions. 3 Data Collection Methodology Task Definition: Given an image, the task is to generate a natural question which can potentially engage a human in starting a conversation. Questions that are visually verifiable, i.e., that can be answered by looking at only the image, are outside the scope of this task. For instance, in Figure 2, a question about the number of horses (appearing in the VQA dataset) or the color of the field is not of interest. Although in this paper we focus on asking a question about an image in isolation, adding prior context or history of conversation is the natural next step in this project. We collected the VQG questions by crowdsourcing the task on Amazon Mechanical Turk (AMT). We provide details on the prompt and the specific instructions for all the crowdsourcing tasks in this paper in the supplementary material. Our prompt was very successful at capturing nonliteral questions, as the good question in Figure 2 demonstrates. In the following Sections, we describe our process for selecting the images to be included in the VQG dataset. We start with images from MS COCO, which enables meaningful comparison with VQA and CQA questions. Given that it is more natural for people to ask questions about event-centric images, we explore sourcing eventful images from Flickr and from querying an image search engine. Each data source is represented by 5,000 images, with 5 questions per image. 3.1 V QGcoco−5000 and V QGF lickr−5000 As our first dataset, we collected VQG questions for a sample of images from the MS COCO dataset4. In order to enable comparisons with related datasets, we sampled 5,000 images of MS COCO which were also annotated by the CQA dataset (Ren et al., 2015) and by VQA (Antol et al., 2015). We name this dataset V QGcoco−5000. Table 1 shows a sample MS COCO image along with annotations in the various datasets. As the CQA questions are generated by rule application from captions, they are not always coherent. The VQA questions are written to evaluate the detailed visual understanding of a robot, so their questions are mainly visually grounded and literal. The table demonstrates how different VQG questions are from VQA and CQA questions. In Figure 3 we provide statistics for the various annotations on that portion of the MS COCO images which are represented in the V QGcoco−5000 4http://mscoco.org/ 1804 Dataset Annotations COCO - A man holding a box with a large chocolate covered donut. CQA - What is the man holding with a large chocolate-covered doughnut in it? VQA - Is this a large doughnut? VQG - Why is the donut so large? - Is that for a specific celebration? - Have you ever eaten a donut that large before? - Is that a big donut or a cake? - Where did you get that? Table 1: Dataset annotations on the above image. dataset. In Figure 3(a) we compare the percentage of object-mentions in each of the annotations. Object-mentions are words associated with the gold-annotated object boundary boxes5 as provided with the MS COCO dataset. Naturally, COCO captions (green bars) have the highest percentage of these literal objects. Since objectmentions are often the answer to VQA and CQA questions, those questions naturally contain objects less frequently. Hence, we see that VQG questions include the mention of more of those literal objects. 
Figure 3(b) shows that COCO captions have a larger vocabulary size, which reflects their longer and more descriptive sentences. VQG shows a relatively large vocabulary size as well, indicating greater diversity in question formulation than VQA and CQA. Moreover, Figure 3(c) shows that the verb part of speech is represented with high frequency in our dataset. Figure 3(d) depicts the percentage of abstract terms such as ‘think’ or ‘win’ in the vocabulary. Following Ferraro et al. (2015), we use a list of most common abstract terms in English (Vanderwende et al., 2015), and count all the other words except a set of function words as concrete. This figure supports our expectation that VQG covers more abstract concepts. Furthermore, Figure 3(e) shows inter-annotation textual similarity according to the BLEU metric (Papineni et al., 2002). Interestingly, VQG shows the highest interannotator textual similarity, which reflects on the existence of consensus among human for asking 5Note that MS COCO annotates only 91 object categories. a natural question, even for object-centric images like the ones in MS COCO. Figure 3: Comparison of various annotations on the MS COCO dataset. (a) Percentage of gold objects used in annotations. (b) Vocabulary size (c) Percentage of verb POS (d) Percentage of abstract terms (e) Inter-annotation textual similarity score. The MS COCO dataset is limited in terms of the concepts it covers, due to its pre-specified set of object categories. Word frequency in V QGcoco−5000 dataset, as demonstrated in Figure 4, bears this out, with the words ‘cat’ and ‘dog’ the fourth and fifth most frequent words in the dataset. Not shown in the frequency graph is that words such as ‘wedding’, ‘injured’, or ‘accident’ are at the very bottom of frequency ranking list. This observation motivated the collection of the V QGFlickr−5000 dataset, with images appearing as the middle photo in a story-full photo album (Huang et al., 2016) on Flickr6. The details about this dataset can be found in the supplementary material. 3.2 V QGBing−5000 To obtain a more representative visualization of specific event types, we queried a search engine7 with 1,200 event-centric query terms which were obtained as follows: we aggregated all ‘event’ and ‘process’ hyponyms in WordNet (Miller, 1995), 1,000 most frequent TimeBank events (Pustejovsky et al., 2003) and a set of manually curated 30 stereotypical events, from which we selected the top 1,200 queries based on Project Gutenberg word frequencies. For each query, we collected the first four to five images retrieved, for a total 6http://www.flickr.com 7https://datamarket.azure.com/dataset/ bing/search 1805 Figure 4: Frequency graph of top 40 words in V QGcoco−5000 dataset. Figure 5: Average annotation length of the three VQG datasets. of 5,000 images, having first used crowdsourcing to filter out images depicting graphics and cartoons. A similar word frequency analysis shows that the V QGBing−5000 dataset indeed contains more words asking about events: happen, work, cause appear in top 40 words, which was our aim in creating the Bing dataset. Statistics: Our three datasets together cover a wide range of visual concepts and events, totaling 15,000 images with 75,000 questions. Figure 5 draws the histogram of number of tokens in VQG questions, where the average question length is 6 tokens. Figure 6 visualizes the n-gram distribution (with n=6) of questions in the three VQG datasets8. Table 2 shows the statistics of the crowdsourcing task. 
3.3 CaptionsBing−5000 The word frequencies of questions about the V QGBing−5000 dataset indicate that this dataset 8Please refer to our web page on http://research. microsoft.com/en-us/downloads to get a link to a dynamic visualization and statistics of all n-gram sequences. # all images 15,000 # questions per image 5 # all workers participated 308 Max # questions written by one worker 6,368 Average work time per worker (sec) 106.5 Median work time per worker (sec) 23.0 Average payment per question (cents) 6.0 Table 2: Statistics of crowdsourcing task, aggregating all three datasets. is substantially different from the MS COCO dataset. Human evaluation results of a recent work (Tran et al., 2016) further confirms the significant image captioning quality degradation on out-of-domain data. To further explore this difference, we crowdsourced 5 captions for each image in the V QGBing−5000 dataset using the same prompt as used to source the MS COCO captions. We call this new dataset CaptionsBing−5000. Table 3 shows the results of testing the state-of-the-art MSR captioning system on the CaptionsBing−5000 dataset as compared to the MS COCO dataset, measured by the standard BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) metrics. The wide gap in the results further confirms that indeed the V QGBing−5000 dataset covers a new class of images; we hope the availability of this new dataset will encourage including more diverse domains for image captioning. BLEU METEOR Bing MS COCO Bing MS COCO 0.101 0.291 0.151 0.247 Table 3: Image captioning results Together with this paper we are releasing an extended set of VQG dataset to the community. We hope that the availability of this dataset will encourage the research community to tackle more end-goal oriented vision & language tasks. 4 Models In this Section we present several generative and retrieval models for tackling the task of VQG. For all the forthcoming models we use the VGGNet (Simonyan and Zisserman, 2014) architecture for computing deep convolutional image features. We primarily use the 4096-dimensional output the last fully connected layer (fc7) as the input to the generative models. 1806 Figure 6: VQG N-gram sequences. ‘End’ token distinguishes natural ending with n-gram cut-off. Figure 7: Three different generative models for tackling the task of VQG. 4.1 Generative Models Figure 7 represents an overview of our three generative models. The MELM model (Fang et al., 2014) is a pipeline starting from a set of candidate word probabilities which are directly trained on images, which then goes through a maximum entropy (ME) language model. The MT model is a Sequence2Sequence translation model (Cho et al., 2014; Sutskever et al., 2014) which directly translates a description of an image into a question, where we used the MS COCO captions and CaptionsBing−5000 as the source of translation. These two models tended to generate less coherent sentences, details of which can be found in the supplementary material. We obtained the best results by using an end-to-end neural model, GRNN, as follows. Gated Recurrent Neural Network (GRNN): This generation model is based on the state-of-theart multimodal Recurrent Neural Network model used for image captioning (Devlin et al., 2015; Vinyals et al., 2015). First, we transform the fc7 vector to a 500-dimensional vector which serves as the initial recurrent state to a 500-dimensional Gated Recurrent Unit (GRU). 
We produce the output question one word at a time using the GRU, until we hit the end-of-sentence token. We train the GRU and the transformation matrix jointly, but we do not back-propagate the CNN due to the size of the training data. The neural network is trained using Stochastic Gradient Descent with early stopping, and decoded using a beam search of size 8. The vocabulary consists of all words seen 3 or more times in the training, which amounts to 1942 unique tokens in the full training set. Unknown words are mapped to to an <unk> token during training, but we do not allow the decoder to produce this token at test time. 4.2 Retrieval Methods Retrieval models use the caption of a nearest neighbor training image to label the test image (Hodosh et al., 2013; Devlin et al., 2015; Farhadi et al., 2010; Ordonez et al., 2011). For the task of image captioning, it has been shown that up to 80% of the captions generated at test time by a near state-of-the-art generation approach (Vinyals et al., 2015) were exactly identical to the training set captions, which suggests that reusing training annotations can achieve good results. Moreover, basic nearest neighbor approaches to image captioning on the MS COCO dataset are shown to outperform generation models according to automatic metrics (Devlin et al., 2015). The performance of retrieval models of course depends on the diversity of the dataset. We implemented several retrieval models customized for the task of VQG. As the first step, we compute K nearest neighbor images for each test image using the fc7 features to get a candidate pool. We obtained the most competitive results by setting K dynamically, as opposed to the earlier 1807 Q. Explosion Hurricane Rain Cloud Car Accident Human - What caused this explosion? - Was this explosion an accident? - What caused the damage to this city? - What happened to this place? - Are those rain clouds? - Did it rain? - Did the drivers of this accident live through it? - How fast were they going? GRNN - How much did the fire cost? - What is being burned here? - What happened to the city? - What caused the fall? - What kind of clouds are these? - Was there a bad storm? - How did the car crash? - What happened to the trailer? KNN - What caused this fire? - What state was this earthquake in? - Did it rain? - Was anybody hurt in this accident? Caption - A train with smoke coming from it. - A pile of dirt. - Some clouds in a cloudy day. - A man standing next to a motorcycle. Table 4: Sample generations by different systems on V QGbing−5000, in order: Humanconsensus and Humanrandom, GRNNbing and GRNNall, KNN+minbleu−all, MSR captions. Q is the query-term. works which fix K throughout the testing. We observed that candidate images beyond a certain distance made the pool noisy, hence, we establish a parameter called max-distance which is an upper bound for including a neighbor image in the pool. Moreover, our experiments showed that if there exists a very similar image to the test image, the candidate pool can be ignored and that test image should become the only candidate9. For addressing this, we set a min-distance parameter. All these parameters were tuned on the corresponding validation sets using the Smoothed-BLEU (Lin and Och, 2004) metric against the human reference questions. Given that each image in the pool has five questions, we define the one-best question to be the question with the highest semantic similarity10 to the other four questions. This results in a pool of K candidate questions. 
The following settings were used for our final retrieval models: – 1-NN: Set K=1, which retrieves the closest image and emits its one-best. – K-NN+min: Set K=30 with max-distance = 0.35, and min-distance = 0.1. Among the 30 9At test time, the frequency of finding a train set image with distance ≤0.1 is 7.68%, 8.4% and 3.0% in COCO, Flickr and Bing datasets respectively. 10We use BLEU to compute textual similarity. This process eliminates outlier questions per image. candidate questions (one-best of each image), find the question with the highest similarity to the rest of the pool and emit that: we compute the textual similarity according the two metrics, SmoothedBLEU and Average-Word2Vec (gensim)11. Table 4 shows a few example images along with the generations of our best performing systems. For more examples please refer to the web page of the project. 5 Evaluation While in VQG the set of possible questions is not limited, there is consensus among the natural questions (discussed in Section 3.1) which enables meaningful evaluation. Although human evaluation is the ideal form of evaluation, it is important to find an automatic metric that strongly correlates with human judgment in order to benchmark progress on the task. 5.1 Human Evaluation The quality of the evaluation is in part determined by how the evaluation is presented. For instance, 11Average-Word2Vec refers to the sentence-level textual similarity metric where we compute the cosine similarity between two sentences by averaging their word-level Word2Vec (Mikolov et al., 2013) vector representations. Here we use the GenSim software framework ( ˇReh˚uˇrek and Sojka, 2010). 1808 it is important for the human judges to see various system hypotheses at the same time in order to give a calibrated rating. We crowdsourced our human evaluation on AMT, asking three crowd workers to each rate the quality of candidate questions on a three-point semantic scale. 5.2 Automatic Evaluation The goal of automatic evaluation is to measure the similarity of system-generated question hypotheses and the crowdsourced question references. To capture n-gram overlap and textual similarity between hypotheses and references, we use standard Machine Translation metrics, BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014). We use BLEU with equal weights up to 4-grams and default setting of METEOR version 1.5. Additionally we use ∆BLEU (Galley et al., 2015) which is specifically tailored towards generation tasks with diverse references, such as conversations. ∆BLEU requires rating per reference, distinguishing between the quality of the references. For this purpose, we crowd-sourced three human ratings (on a scale of 1-3) per reference and used the majority rating. The pairwise correlational analysis of human and automatic metrics is presented in Table 6, where we report on Pearson’s r, Spearman’s ρ and Kendall’s τ correlation coefficients. As this table reveals, ∆BLEU strongly correlates with human judgment and we suggest it as the main evaluation metric for testing a VQG system. It is important to note that BLEU is also very competitive with ∆BLEU, showing strong correlations with human judgment. Hence, we recommend using BLEU for any further benchmarking and optimization purposes. BLEU can also be used as a proxy for ∆BLEU for evaluation purposes whenever rating per reference are not available. 5.3 Results In this section, we present the human and automatic metric evaluation results of the models introduced earlier. 
We randomly divided each VQG5000 dataset into train (50%), val (25%) and test (25%) sets. In order to shed some light on differences between our three datasets, we present the evaluation results separately on each dataset in Table 5. Each model (Section 4.2) is once trained on all train sets, and once trained only on its corresponding train set (represented as X in the results table). For quality control and further insight on the task, we include two human annotations among our models: ‘Humanconsensus’ (the same as one-best) which indicates the consensus human annotation on the test image and ‘Humanrandom’ which is a randomly chosen annotation among the five human annotations. It is quite interesting to see that among the human annotations, Humanconsensus achieves consistently higher scores than Humanrandom. This further verifies that there is indeed a common intuition about what is the most natural question to ask about a given image. As the results of human evaluation in Table 5 shows, GRNNall performs the best as compared with all the other models in 2/3 of runs. All the models achieve their best score on V QGCOCO−5000, which was expected given the less diverse set of images. Using automatic metrics, the GRNNX model outperforms other models according to all three metrics on the V QGBing−5000 dataset. Among retrieval models, the most competitive is K-NN+min bleu all, which performs the best on V QGCOCO−5000 and V QGFlickr−5000 datasets according to BLEU and ∆BLEU score. This further confirms our effective retrieval methodology for including min-distance and n-gram overlap similarity measures. Furthermore, the boost from 1-NN to K-NN models is considerable according to both human and automatic metrics. It is important to note that none of the retrieval models beat the GRNN model on the Bing dataset. This additionally shows that our Bing dataset is in fact more demanding, making it a meaningful challenge for the community. 6 Discussion We introduced the novel task of ‘Visual Question Generation’, where given an image, the system is tasked with asking a natural question. We provide three distinct datasets, each covering a variety of images. The most challenging is the Bing dataset, requiring systems to generate questions with event-centric concepts such as ‘cause’, ‘event’, ‘happen’, etc., from the visual input. Furthermore, we show that our Bing dataset presents challenging images to the state-of-the-art captioning systems. We encourage the community to report their system results on the Bing test dataset and according to the ∆BLEU automatic metric. All the datasets will be released to the public12. This work focuses on developing the capabil12Please find Visual Question Generation under http:// research.microsoft.com/en-us/downloads. 1809 Humanconsensus Humanrandom GRNNX GRNNall 1-NNbleu−X 1-NNgensim−X K-NN+minbleu−X K-NN+mingensim−X 1-NNbleu−all 1-NNgensim−all K-NN+minbleu−all K-NN+mingensim−all Human Evaluation Bing 2.49 2.38 1.35 1.76 1.72 1.72 1.69 1.57 1.72 1.73 1.75 1.58 COCO 2.49 2.38 1.66 1.94 1.81 1.82 1.88 1.64 1.82 1.82 1.96 1.74 Flickr 2.34 2.26 1.24 1.57 1.44 1.44 1.54 1.28 1.46 1.46 1.52 1.30 Automatic Evaluation BLEU Bing 87.1 83.7 12.3 11.1 9.0 9.0 11.2 7.9 9.0 9.0 11.8 7.9 COCO 86.0 83.5 13.9 14.2 11.0 11.0 19.1 11.5 10.7 10.7 19.2 11.2 Flickr 84.4 83.6 9.9 9.9 7.4 7.4 10.9 5.9 7.6 7.6 11.7 5.8 MET. 
Bing 62.2 58.8 16.2 15.8 14.7 14.7 15.4 14.7 14.7 14.7 15.5 14.7 COCO 60.8 58.3 18.5 18.5 16.2 16.2 19.7 17,4 15.9 15.9 19.5 17.5 Flickr 59.9 58.6 14.3 14.9 12.3 12.3 13.6 12.6 12.6 12.6 14.6 13.0 ∆BLEU Bing 63.38 57.25 11.6 10.8 8.28 8.28 10.24 7.11 8.43 8.43 11.01 7.59 COCO 60.81 56.79 12.45 12.46 9.85 9.85 16.14 9.96 9.78 9.78 16.29 9.96 Flickr 62.37 57.34 9.36 9.55 6.47 6.47 9.49 5.37 6.73 6.73 9.8 5.26 Table 5: Results of evaluating various models according to different metrics. X represents training on the corresponding dataset in the row. Human score per model is computed by averaging human score across multiple images, where human score per image is the median rating across the three raters. METEOR BLEU ∆BLEU r 0.916 (4.8e-27) 0.915 (4.6e-27) 0.915 (5.8e-27) ρ 0.628 (1.5e-08) 0.67 (7.0e-10) 0.702 (5.0e-11) τ 0.476 (1.6e-08) 0.51 (7.9e-10) 0.557 (3.5e-11) Table 6: Correlations of automatic metrics against human judgments, with p-values in parentheses. ity to ask relevant and to-the-point questions, a key intelligent behavior that an AI system should demonstrate. We believe that VQG is one step towards building such a system, where an engaging question can naturally start a conversation. To continue progress on this task, it is possible to increase the size of the training data, but we also expect to develop models that will learn to generalize to unseen concepts. For instance, consider the examples of system errors in Table 7, where visual features can be enough for detecting the specific set of objects in each image, but the system cannot make sense of the combination of previously unseen concepts. Another natural future extension of this work is to include question generation within a conversational system (Sordoni et al., 2015; Li et al., 2016), where the context and conversation history affect the types of questions being asked. Human - How long did it take to make that ice sculpture? - Is the dog looking to take a shower? GRNN - How long has he been hiking? - Is this in a hotel room? KNN - How deep was the snow? - Do you enjoy the light in this bathroom? Table 7: Examples of errors in generation. The rows are Humanconsensus, GRNNall, and KNN+minbleu−all. Acknowledgment We would like to thank the anonymous reviewers for their invaluable comments. We thank Larry Zitnick and Devi Parikh for their helpful discussions regarding this work, Rebecca Hanson for her great help in data collection, Michel Galley for his guidelines on evaluation, and Bill Dolan for his valuable feedback throughout this work. 1810 References Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2014. 2d human pose estimation: New benchmark and state of the art analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In International Conference on Computer Vision (ICCV). Lee Becker, Sumit Basu, and Lucy Vanderwende. 2012. Mind the gap: Learning to choose gaps for question generation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 742–751, Montr´eal, Canada, June. Association for Computational Linguistics. Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. 2015. HICO: A benchmark for recognizing human-object interactions in images. 
In Proceedings of the IEEE International Conference on Computer Vision. Jianfu Chen, Polina Kuznetsova, David Warren, and Yejin Choi. 2015. D´ej`a image-captions: A corpus of expressive descriptions in repetition. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 504–514, Denver, Colorado, May–June. Association for Computational Linguistics. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 100–105, Beijing, China, July. Association for Computational Linguistics. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2014. Long-term recurrent convolutional networks for visual recognition and description. CoRR, abs/1411.4389. Mark Everingham, Luc Gool, Christopher K. Williams, John Winn, and Andrew Zisserman. 2010. The pascal visual object classes (voc) challenge. Int. J. Comput. Vision, 88(2):303–338, June. Hao Fang, Saurabh Gupta, Forrest N. Iandola, Rupesh Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2014. From captions to visual concepts and back. CoRR, abs/1411.4952. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV’10, pages 15–29, Berlin, Heidelberg. Springer-Verlag. Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 207–213, Lisbon, Portugal, September. Association for Computational Linguistics. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 445–450, Beijing, China, July. Association for Computational Linguistics. Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual image question answering. CoRR, abs/1505.05612. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617, Los Angeles, California, June. Association for Computational Linguistics. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. J. Artif. Int. Res., 47(1):853–899, May. Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross B. Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, and Margaret Mitchell. 2016. Visual storytelling. 1811 In Proceedings of NAACL 2016. Association for Computational Linguistics. Joseph Jordania. 2006. Who Asked the First Question? The Origins of Human Choral Singing, Intelligence, Language and Speech. Logos. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. CoRR, abs/1405.0312. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114, Sofia, Bulgaria, August. Association for Computational Linguistics. Mateusz Malinowski and Mario Fritz. 2014. A multiworld approach to question answering about realworld scenes based on uncertain input. In Advances in Neural Information Processing Systems 27, pages 1682–1690. Karen Mazidi and Rodney D. Nielsen. 2014. Linguistic considerations in automatic question generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 321–326, Baltimore, Maryland, June. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111– 3119. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41, November. Ruslan Mitkov and Le An Ha. 2003. Computer-aided generation of multiple-choice tests. 
In Jill Burstein and Claudia Leacock, editors, Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing, pages 17–22. Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Neural Information Processing Systems (NIPS). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. J. Pustejovsky, P. Hanks, R. Sauri, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro, and M. Lazo. 2003. The TIMEBANK corpus. In Proceedings of Corpus Linguistics 2003, pages 647–656, Lancaster, March. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50, Valletta, Malta, May. ELRA. http://is. muni.cz/publication/884893/en. Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Question answering about images using visual semantic embeddings. In Deep Learning Workshop, ICML 2015. Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. 2012. A database for fine grained activity detection of cooking activities. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, IEEE, June. K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado, May–June. Association for Computational Linguistics. 1812 Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada, pages 3104– 3112. Kenneth Tran, Xiaodong He, Lei Zhang, Jian Sun, Cornelia Carapcea, Chris Thrasher, Chris Buehler, and Chris Sienkiewicz. 2016. Rich image captioning in the wild. In Proceedings of Deep Vision Workshop at CVPR 2016. IEEE, June. Lucy Vanderwende, Arul Menezes, and Chris Quirk. 2015. An amr parser for english, french, german, spanish and japanese and a new amr-annotated corpus. Proceedings of NAACL 2015, June. Lucy Vanderwende. 2008. The importance of being important: Question generation. In In Workshop on the Question Generation Shared Task and Evaluation Challenge, Arlington, VA. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. In Proceedings the 2015 Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT 2015), pages 1494–1504, Denver, Colorado, June. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition. 
John H Wolfe. 1976. Automatic question generation from text-an aid to independent study. In ACM SIGCUE Outlook, volume 10, pages 104–112. ACM. Yuanjun Xiong, Kai Zhu, Dahua Lin, and Xiaoou Tang. 2015. Recognize complex events from static images by fusing deep channels. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas J. Guibas, and Li Fei-Fei. 2011a. Action recognition by learning bases of action attributes and parts. In International Conference on Computer Vision (ICCV), Barcelona, Spain, November. Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas J. Guibas, and Li Fei-Fei. 2011b. Human action recognition by learning bases of action attributes and parts. In International Conference on Computer Vision (ICCV), Barcelona, Spain, November. Yuke Zhu, Oliver Groth, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual7w: Grounded question answering in images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, IEEE. 1813
2016
170
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1814–1824, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Physical Causality of Action Verbs in Grounded Language Understanding Qiaozi Gao† Malcolm Doering‡∗Shaohua Yang† Joyce Y. Chai† †Computer Science and Engineering, Michigan State University, East Lansing, MI, USA ‡Department of Systems Innovation, Osaka University, Toyonaka, Osaka, Japan {gaoqiaoz, yangshao, jchai}@msu.edu [email protected] Abstract Linguistics studies have shown that action verbs often denote some Change of State (CoS) as the result of an action. However, the causality of action verbs and its potential connection with the physical world has not been systematically explored. To address this limitation, this paper presents a study on physical causality of action verbs and their implied changes in the physical world. We first conducted a crowdsourcing experiment and identified eighteen categories of physical causality for action verbs. For a subset of these categories, we then defined a set of detectors that detect the corresponding change from visual perception of the physical environment. We further incorporated physical causality modeling and state detection in grounded language understanding. Our empirical studies have demonstrated the effectiveness of causality modeling in grounding language to perception. 1 Introduction Linguistics studies have shown that action verbs often denote some change of state (CoS) as the result of an action, where the change of state often involves an attribute of the direct object of the verb (Hovav and Levin, 2010). For example, the result of “slice a pizza” is that the state of the object (pizza) changes from one big piece to several smaller pieces. This change of state can be perceived from the physical world. In Artificial Intelligence (Russell and Norvig, 2010), decades of research on planning, for example, back to the early days of the STRIPS planner (Fikes and Nilsson, ∗This work was conducted at Michigan State University where the author received his MS degree. 1971), have defined action schemas to capture the change of state caused by a given action. Based on action schemas, planning algorithms can be applied to find a sequence of actions to achieve a goal state (Ghallab et al., 2004). The state of the physical world is a very important notion and changing the state becomes a driving force for agents’ actions. Thus, motivated by linguistic literature on action verbs and AI literature on action representations, in our view, modeling change of physical state for action verbs, in other words, physical causality, can better connect language to the physical world. Although this kind of physical causality has been described in linguistic studies (Hovav and Levin, 2010), a detailed account of potential causality that could be denoted by an action verb is lacking. For example, in VerbNet (Schuler, 2005) the semantic representation for various verbs may indicate that a change of state is involved, but it does not provide the specifics associated with the verb’s meaning (e.g., to what attribute of its patient the changes might occur). To address this limitation, we have conducted an empirical investigation on verb semantics from a new angle of how they may change the state of the physical world. 
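To make the connection between verb causality and AI action representations concrete, the following is a minimal, illustrative STRIPS-style schema for slice, in the spirit of the planning literature cited above; the predicate names and the dictionary encoding are assumptions made purely for illustration and are not a representation used in this work.

```python
# A toy STRIPS-style schema for the verb "slice", encoding the change of state
# of its patient (e.g., "slice a pizza": one whole object becomes several pieces).
slice_schema = {
    "action": "slice(agent, patient, instrument)",
    "preconditions": ["whole(patient)", "holding(agent, instrument)", "sharp(instrument)"],
    "effects": {
        "add": ["in_pieces(patient)"],      # NumberOfPieces: one becomes many
        "delete": ["whole(patient)"],
    },
}

def applicable(schema, state):
    """An action is applicable if all of its preconditions hold in the state."""
    return all(p in state for p in schema["preconditions"])

def apply_action(schema, state):
    """Return the successor state after applying the action's effects."""
    return (state - set(schema["effects"]["delete"])) | set(schema["effects"]["add"])

# state = {"whole(patient)", "holding(agent, instrument)", "sharp(instrument)"}
# if applicable(slice_schema, state):
#     state = apply_action(slice_schema, state)   # patient is now in pieces
```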
As the first step in this investigation, we selected a set of action verbs from a cooking domain and conducted a crowd-sourcing study to examine the potential types of causality associated with these verbs. Motivated by linguistics studies on typology for gradable adjectives, which also have a notion of change along a scale (Dixon and Aikhenvald, 2006), we developed a set of eighteen main categories to characterize physical causality. We then defined a set of change-of-state detectors focusing on visual perception. We further applied two approaches, a knowledge-driven approach and a learning-based approach, to incorporate causality modeling in 1814 grounded language understanding. Our empirical results have demonstrated that both of these approaches achieve significantly better performance in grounding language to perception compared to previous approaches (Yang et al., 2016). 2 Related Work The notion of causality or causation has been explored in psychology, linguistics, and computational linguistics from a wide range of perspectives. For example, different types of causal relations such as causing, enabling, and preventing (Goldvarg and Johnson-Laird, 2001; Wolff and Song, 2003) have been studied extensively as well as their linguistic expressions (Wolff, 2003; Song and Wolff, 2003; Neeleman et al., 2012) and automated extraction of causal relations from text (Blanco et al., 2008; Mulkar-Mehta et al., 2011; Radinsky et al., 2012; Riaz and Girju, 2014). Different from these previous works, this paper focuses on the physical causality of action verbs, in other words, change of state in the physical world caused by action verbs as described in (Hovav and Levin, 2010). This is motivated by recent advances in computer vision, robotics, and grounding language to perception and action. A recent trend in computer vision has started looking into intermediate representations beyond lower-level visual features for action recognition, for example, by incorporating object affordances (Koppula et al., 2013) and causality between actions and objects (Fathi and Rehg, 2013). Fathi and Rehg (2013) have borken down detection of actions to detection of state changes from video frames. Yang and colleagues (2013; 2014) have developed an object segmentation and tracking method to detect state changes (or, in their terms, consequences of actions) for action recognition. More recently, Fire and Zhu (2015) have developed a framework to learn perceptual causal structures between actions and object statuses in videos. In the robotics community, as robots’ low-level control systems are often pre-programmed to handle (and thus execute) only primitive actions, a high-level language command will need to be translated to a sequence of primitive actions in order for the corresponding action to take place. To make such translation possible, previous works (She et al., 2014a; She et al., 2014b; Misra et al., 2015; She and Chai, 2016) explicitly model verbs with predicates describing the resulting states of actions. Their empirical evaluations have demonstrated how incorporating resulting states into verb representations can link language with underlying planning modules for robotic systems. These results have motivated a systematic investigation on modeling physical causality for action verbs. 
Although recent years have seen an increasing amount of work on grounding language to perception (Yu and Siskind, 2013; Walter et al., 2013; Liu et al., 2014; Naim et al., 2015; Liu and Chai, 2015), no previous work has investigated the link between physical causality denoted by action verbs and the change of state visually perceived. Our work here intends to address this limitation and examine whether the causality denoted by action verbs can provide top-down information to guide visual processing and improve grounded language understanding. 3 Modeling Physical Causality for Action Verbs 3.1 Linguistics Background on Action Verbs Verb semantics have been studied extensively in linguistics (Pustejovsky, 1991; Levin, 1993; Baker et al., 1998; Kingsbury and Palmer, 2002). Particularly, for action verbs (such as run, throw, cook), Hovav and Levin (Hovav and Levin, 2010) propose that they can be divided into two types: manner verbs that “specify as part of their meaning a manner of carrying out an action” (e.g., nibble, rub, scribble, sweep, flutter, laugh, run, swim), and result verbs that “specify the coming about of a result state” (e.g., clean, cover, empty, fill, chop, cut, melt, open, enter). Result verbs can be further classified into three categories: Change of State verbs, which denote a change of state for a property of the verb’s object (e.g. “to warm”); Inherently Directed Motion verbs, which denote movement along a path in relation to a landmark object (e.g. “to arrive”); and Incremental Theme verbs, which denote the incremental change of volume or area of the object (e.g. “to eat”) (Levin and Hovav, 2010). In this work, we mainly focus on result verbs. Unlike Hovav and Levin’s definition of Change of State verbs, we use the term change of state in a more general way such that the location, volume, and area of an object are part of its state. Previous linguistic studies have also shown that result verbs often specify movement along a scale (Hovav and Levin, 2010), i.e., they are 1815 verbs of scalar change. A scale is “a set of points on a particular dimension (e.g. height, temperature, cost)”. In the case of verbs, the dimension is an attribute of the object of the verb. For example, “John cooled the coffee” means that the temperature attribute of the object coffee has decreased. Kennedy and McNally give a very detailed description of scale structure and its variations (Kennedy and McNally, 2005). Interestingly, gradable adjectives also have their semantics defined in terms of a scale structure. Dixon and Aikhenvald have defined a typology for adjectives which include categories such as Dimension, Color, Physical Property, Quantification, and Position (Dixon and Aikhenvald, 2006). The connection between gradable adjectives and result verbs through scale structure motivates us to use the Dixon typology as a basis to define our categorization of causality for verbs. In summary, previous linguistic literature has provided abundant evidence and discussion on change of state for action verbs. It has also provided extensive knowledge on potential dimensions that can be used to categorize change of state as described in this paper. 3.2 A Crowd-sourcing Study Motivated by the above linguistic insight, we have conducted a pilot study to examine the feasibility of causality modeling using a small set of verbs which appear in the TACoS corpus (Regneri et al., 2013). This corpus is a collection of natural language descriptions of actions that occur in a set of cooking videos. 
This is an ideal dataset to start with since it contains mainly descriptions of physical actions. Possibly because most actions in the cooking domain are goal-directed, a majority of the verbs in TACoS denote results of actions (changes of state) which can be observed in the world. More specifically, we chose ten verbs (clean, rinse, wipe, cut, chop, mix, stir, add, open, shake)) based on the criteria that they occur relatively frequently in the corpus and take a variety of different objects as their patient. We paired each verb with three different objects in the role of patient. Nouns (e.g., cutting board, dish, counter, knife, hand, cucumber, beans, leek, eggs, water, break, bowl, etc.) were chosen based on the criteria that they represent objects dissimilar to each other, since we hypothesize that the change of state indicated by the verb will differ depending on the object’s features. Each verb-noun pair was presented to turkers via Amazon Mechanical Turk (AMT) and they were asked to describe (by text) the changes of state that occur to the object as a result of the verb. The descriptions were collected under two conditions: (1) without showing the corresponding video clips (so turkers would have to use their imagination of the physical situation) and (2) showing the corresponding video clips. For each condition and each verb-noun pair, we collected 30 turkers’ responses, which resulted in a total of 1800 natural language responses describing change of state. 3.3 Categorization of Change of State Based on Dixon and Aikhenvald’s typology for adjectives (Dixon and Aikhenvald, 2006) and turkers’ responses, we identified a categorization to characterize causality, as shown in Table 1. This categorization is also driven by the expectation that these attributes can be potentially recognized from the physical world by artificial agents. The first column specifies the type of state change and the second column specifies specific attributes related to the type. The third column specifies the particular value associated with the attribute, e.g., it could be a binary categorization on whether a change happens or not (i.e., changes), or a direction along a scale (i.e., increase/decrease), or a specific value (i.e., specific such as “five pieces”). In total, we have identified eighteen causality categories corresponding to eighteen attributes as shown in Table 1. An important motivation of modeling physical causality is to provide guidance for visual processing. Our hypothesis is that once a language description is given together with its corresponding visual scene, potential causality of verbs or verb-noun pairs can trigger some visual detectors associated with the scene. This can potentially improve grounded language understanding (e.g., grounding nouns to objects in the scene). Next we give a detailed account on these visual detectors and their role in grounded language understanding. 4 Visual Detectors based on Physical Causality The changes of state associated with the eighteen attributes can be detected from the physical world 1816 Type Attribute Attribute Value Dimension Size, length, volume Changes, increases, decreases, specific Shape Changes, specific (cylindrical, flat, etc.) Color/Texture Color Appear, disappear, changes, mix, separate, specific (green, red, etc.) Texture Changes, specific (slippery, frothy, etc.) 
Physical Property Weight Increase, decrease Flavor, smell Changes, intensifies, specific Solidity Liquefies, solidifies, specific Wetness Becomes wet(ter), dry(er) Visibility Appears, disappears Temperature Increases, decreases Containment Becomes filled, emptied, hollow Surface Integrity A hole or opening appears Quantification Number of pieces Increases, one becomes many, decreases, many become one Position Location Changes, enter/exit container, specific Occlusion Becomes covered, uncovered Attachment Becomes detached Presence No longer present, becomes present Orientation Changes, specific Table 1: Categorization of physical causality. Attribute Rule-based Detector Refined Rule-based Detector Attachment / NumberOfPieces Multiple object tracks merge into one, or one object track breaks into multiple. Multiple tracks merge into one. One track breaks into multiple. Presence / Visibility Object track appears or disappears. Object track appears. Object track disappears. Location Object’s final location is different from the initial location. Location shifts upwards. Location shifts downwards. Location shifts rightwards. Location shifts leftwards. Size Object’s x-axis length or y-axis length is different from the initial values. Object’s x-axis length increases. Object’s x-axis length decreases. Object’s y-axis length increases. Object’s y-axis length decreases. Table 2: Causality detectors applied to patient of a verb. using various sensors. In this paper, we only focus on attributes that can be detected by visual perception. More specifically, we chose the subset: Attachment, NumberOfPieces, Presence, Visibility, Location, Size. They are chosen because: 1) according to the pilot study, they are highly correlated with our selected verbs; and 2) they are relatively easy to be detected from vision. Corresponding to these causality attributes, we defined a set of rule-based detectors as shown in Table 2. These in fact are very simple detectors, which consist of four major detectors and a refined set that distinguishes directions of state change. These visual detectors are specifically applied to the potential objects that may serve as patient for a verb to identify whether certain changes of state occur to these objects in the visual scene. 5 Verb Causality in Grounded Language Understanding In this section, we demonstrate how verb causality modeling and visual detectors can be used together for grounded language understanding. As shown in Figure 1, given a video clip V of human action and a parallel sentence S describing the action, our goal is to ground different semantic roles of the verb (e.g., get) to objects in the video. This is similar to the grounded semantic role labeling task (Yang et al., 2016). Here, we focus on a set of four semantic roles {agent, patient, source, destination}. We also assume that we have object and hand tracking results from video data. Each object in the video is represented by a track, which is a series of bounding boxes across video frames. Thus, given a video clip and a parallel sentence, the task is to ground semantic roles of the verb λ1, λ2, . . . , λk to object (or hand) tracks γ1, γ2, . . . , γn, in the video.1 We applied two approaches to this problem. 1For manipulation actions, the agent is almost always one of the human’s hands (or both hands). So we constrain the grounding of the agent role to hand tracks, and constrain the grounding of the other roles to object tracks. 1817 Language descrip.on: The man gets a knife from the drawer. 
Verb: “get” Agent: ground to the hand in the green box Pa.ent: “knife”, ground to the object in the red box Source: “drawer”, ground to the object in the blue box Figure 1: Grounding semantic roles of the verb get in the sentence: the man gets a knife from the drawer. 5.1 Knowledge-driven Approach We intend to establish that the knowledge of physical causality for action verbs can be acquired directly from the crowd and such knowledge can be coupled with visual detectors for grounded language understanding. Acquiring Knowledge. To acquire knowledge of verb causality, we collected a larger dataset of causality annotations based on sentences from the TACoS Multilevel corpus (Rohrbach et al., 2014), through crowd-sourcing on Amazon Mechanical Turk. Annotators were shown a sentence containing a verb-patient pair (e.g., “The person chops the cucumber into slices on the cutting board”). And they were asked to annotate the change of state that occurred to the patient as a result of the verb by choosing up to three options from the 18 causality attributes. Each sentence was annotated by three different annotators. This dataset contains 4391 sentences, with 178 verbs, 260 nouns, and 1624 verb-noun pairs. After summarizing the annotations from three different annotators, each sentence is represented by a 18-dimension causality vector. In the vector, an element is 1 if at least two annotators labeled the corresponding causality attribute as true, 0 otherwise. For 83% of all the annotated sentences, at least one causality attribute was agreed on by at least two people. From the causality annotation data, we can extract a verb causality vector c(v) for each verb by averaging all causality vectors of the sentences that contain this verb v. Applying Knowledge. Since the collected causality knowledge was only for the patient, we first look at the grounding of patient. Given a sentence containing a verb v and its patient, we want to ground the patient to one of the object tracks in the video clip. Suppose we have the causality knowledge, i.e., c(v), for the verb. For each candidate track in the video, we can generate a causality detection vector d(γi), using the predefined causality detectors. A straightforward way is to ground the patient to the object track whose causality detection results has the best coherence with the causality knowledge of the verb. The coherence is measured by the cosine similarity between c(v) and d(γi).2 Since objects in other semantic roles often have relations with the patient during the action, once we have grounded the patient, we can use it as an anchor point to ground the other three semantic roles. To do this, we define two new detectors for grounding each role as shown in Table 3. These detectors are designed using some common sense knowledge, e.g., source is likely to be the initial location of the patient; destination is likely to be the final location of the patient; agent is likely to be the hand that touches the patient. With these new detectors, we simply ground a role to the object (or hand) track that has the largest number of positive detections from the corresponding detectors. It is worth noting that although currently we only acquired knowledge for verbs that appear in the cooking domain, the same approach can be extended to verbs in other domains. The detectors associated with attributes are expected to remain the same. 
The significance of this knowledgedriven method is that, once you have the causality knowledge of a verb, it can be directly applied to any domain without additional training. 5.2 Learning-based Approach Our second approach is based on learning from training data. A key requirement for this approach is the availability of annotated data where the arguments of a verb are already correctly grounded to the objects in the visual scene. Then we can learn the association between detected causality 2In the case that not every causality attribute has a corresponding detector, we need to first condense c(v) to the same dimensionality with d(γi). 1818 Semantic Role Rule-based Detector Source Patient track appears within its bounding box. Its track is overlapping with the patient track at the initial frame. Destination Patient track disappears within its bounding box. Its track is overlapping with the patient track at the final frame. Agent Its track is overlapping with the patient track when the patient track appears or disappears. Its track is overlapping with the patient track when the patient track starts moving or stops moving. Table 3: Causality detectors for grounding source, destination, and agent. attributes and verbs. We use Conditional Random Field (CRF) to model the semantic role grounding problem. In this approach, causality detection results are used as features in the model. An example CRF factor graph is shown in Figure 2. The structure of CRF graph is created based on the extracted semantic roles, which already abstracts away syntactic variations such as active/passive constructions. This CRF model is similar to the ones in (Tellex et al., 2011) and (Yang et al., 2016), where φ1, . . . , φ4 are binary random variables, indicating whether the grounding is correct. In the learning stage, we use the following objective function: p(Φ|λ1, . . . , λk, γ1, . . . , γk, v) = 1 Z Y i Ψi(φi, λi, γ1, . . . , γk, v) (1) where Φ is the binary random vector [φ1, . . . , φk], and v is the verb. Z is the normalization constant. Ψi is the potential function that takes the following log-linear form: Ψi(φi, λi, Γ, v) = exp X l wlfl(φi, λi, Γ, v) ! (2) where fl is a feature function, wl is feature weight to be learned, and Γ = [γ1, . . . , γk] are the groundings. In our model, we use the following features: 1. Joint features between a track label of γi and a word occurrence in λi. 2. Joint features between each of the causality detection results and a verb v. Causality detection includes all the detectors in Table 2 and Table 3. Note that the causality detectors Figure 2: The CRF factor graph of the sentence: the man gets a knife from the drawer. shown in Table 3 capture relations between groundings of different semantic roles. During learning, gradient ascent with L2 regularization is used for parameter learning. Compared to (Tellex et al., 2011) and (Yang et al., 2016), a key difference in our model is the incorporation of causality detectors. These previous works (Tellex et al., 2011; Yang et al., 2016) apply geometric features, for example, to capture relations, distance, and relative directions between grounding objects. These geometric features can be noisy. In our model, features based on causality detectors are motivated and informed by the underlying causality models for corresponding action verbs. In the inference step, we want to find the most probable groundings. Given a video clip and its parallel sentence, we fix the Φ to be true, and search for groundings γ1, . . . 
, γk that maximize the probability as in Equation 1. To reduce the search space we apply beam search to ground in the following order: patient, source, destination, agent. 5.3 Experiments and Results We conducted our experiments using the dataset from (Yang et al., 2016). This dataset was developed from a subset of the TACoS corpus (Regneri et al., 2013). It contains a set of video clips paired with natural language descriptions related to two cooking tasks “cutting cucumber” and “cutting bread”. Each task has 5 videos showing how different people perform the same task, and each of these videos was split into pairs of video clips and corresponding sentences. For each video clip, objects are annotated with bounding boxes, tracks, 1819 All take put get cut open wash slice rinse place peel remove # Instances 279 58 15 47 29 6 28 13 29 29 10 15 With Ground-truth Track Labels Label Matching 67.7 70.7 46.7 72.3 69.0 16.7 85.7 69.2 82.8 37.9 90.0 60.0 Yang et al., 2016 84.6 93.2 91.7 93.6 77.8 80.0 93.5 86.7 90.0 66.7 80.0 38.9 VC-Knowledge 89.6∗ 94.8 73.3 100∗ 93.1 83.3 100 92.3 96.6 58.6 90.0 73.3∗ VC-Learning 90.3∗ 94.8 86.7 100∗ 93.1 83.3 89.3 92.3 96.6 75.9 80.0 66.7∗ Without Track Labels Label Matching 9.0 12.1 13.3 2.1 10.3 16.7 3.6 7.7 10.3 10.3 20.0 6.7 Yang et al., 2016 24.5 11.9 8.3 17.0 50.0 10.0 29.0 40.0 40.0 0 60.0 11.1 VC-Knowledge 60.2∗ 82.8∗ 60.0∗ 87.2∗ 58.6 50.0 39.3 46.2 41.4 48.3∗ 10.0 40.0 VC-Learning 71.7∗ 91.4∗ 33.3 87.2∗ 72.4 83.3∗ 46.4 84.6∗ 51.7 65.5∗ 80.0 60.0∗ Table 4: Grounding accuracy on patient role Overall Agent Patient Source Destination Number of Instances 644 279 279 51 35 With Ground-truth Track Labels Label Matching 66.3 68.5 67.7 41.2 74.3 Yang et al., 2016 84.2 86.4 84.6 72.6 81.6 VC-Knowledge 86.8 89.3 89.6∗ 60.8 82.9 VC-Learning 88.2∗ 88.2 90.3∗ 76.5 88.6 Without Track Labels Label Matching 33.5 66.7 9.0 7.8 2.9 Yang et al., 2016 48.2 86.1 24.5 15.7 13.2 VC-Knowledge 69.9∗ 89.6 60.2∗ 45.1∗ 25.7 VC-Learning 75.0∗ 87.1 71.7∗ 41.2∗ 54.3∗ Table 5: Grounding accuracy on four semantic roles and labels (e.g. “cucumber, cutting board” etc). For each sentence, the semantic roles of a verb are extracted using Propbank (Kingsbury and Palmer, 2002) definitions and each of them is annotated with the ground truth groundings in terms of the object tracks in the corresponding video clip. We selected the 11 most frequent verbs (get, take, wash, cut, rinse, slice, place, peel, put, remove, open) and the 4 most frequent explicit semantic roles (agent, patient, source, destination) in this evaluation. In total, this dataset includes 977 pairs of video clips and corresponding sentences, and 1096 verb-patient occurrences. We compare our knowledge-driven approach (VC-Knowledge) and learning-based approach (VC-Learning) with the following two baselines. Label Matching. This method simply grounds the semantic role to the track whose label matches the word phrase. If there are multiple matching tracks, it will randomly choose one of them. If there is no matching track, it will randomly select one from all the tracks. Yang et al., 2016. This work studies grounded semantic role labeling. The evaluation data from this work is used in this paper. It is a natural baseline for comparison. To evaluate the learning-based approaches such as VC-Learning and (Yang, et al., 2016), 75% of video clips with corresponding sentences were randomly sampled as the training set. The remaining 25% were used as the test set. 
For approaches which do not need training such as Label Matching and VC-Knowledge, we used the same test set to report their results. The results of the patient role grounding for each verb are shown in Table 4. The results of grounding all four semantic roles are shown in Table 5. The scores in bold are statistically significant (p < 0.05) compared to the Label Matching method. The scores with an asterisk (∗) are statistically significant (p < 0.05) compared to (Yang et al., 2016). As it can be difficult to obtain labels for the track, especially when the vision system encounters novel objects, we further conducted several experiments assuming we do not know the labels for the object tracks. In this case, only geometric information of tracked objects is available. Table 4 and Table 5 also include these results. From the grounding results, we can see that the causality modeling has shown to be very effective in grounding semantic roles. First of all, both the knowledge-driven approach and the learningbased approach outperform the two baselines. In 1820 All take put get cut open wash slice rinse place peel remove VC-Knowledge 89.6 94.8 73.3 100 93.1 83.3 100 92.3 96.6 58.6 90.0 73.3 P-VC-Knowledge 89.9 96.6 73.3 100 96.6 66.7 100 92.3 96.6 65.5 90.0 60.0 Table 6: Grounding accuracy on patient role using predicted causality knowledge. particular, our knowledge-driven approach (VCKnowledge) even outperforms the trained model (Yang et al., 2016). Our learning-based approach (VC-Learning) achieves the best overall performance. In the learning-based approach, causality detection results can be seen as a set of intermediate visual features. The reason that our learning-based approach significantly outperforms the similar model in (Yang et al., 2016) is that the causality categorization provides a good guideline for designing intermediate visual features. These causality detectors focus on the changes of state of objects, which are more robust than the geometric features used in (Yang et al., 2016). In the setting of no object recognition labels, VC-Knowledge and VC-Learning also generate significantly better grounding accuracy than the two baselines. This once again demonstrates the advantage of using causality detection results as intermediate visual features. All these results illustrate the potential of causality modeling for grounded language understanding. The results in Table 5 also indicate that grounding source or destination is more difficult than grounding patient or agent. One reason could be that source and destination do not exhibit obvious change of state as a result of action, so their groundings usually depend on the correct grounding of other roles such as patient. Since automated tracking for this TACoS dataset is notably difficult due to the complexity of the scene and the lack of depth information, our current results are based on annotated tracks. But object tracking algorithms have made significant progress in recent years (Yang et al., 2013; Milan et al., 2014). We intend to apply our algorithms with automated tracking on real scenes in the future. 6 Causality Prediction for New Verbs While various methods can be used to acquire causality knowledge for verbs, it may be the case that during language grounding, we do not know the causality knowledge for every verb. Furthermore, manual annotation/acquisition of causality knowledge for all verbs can be time-consuming. 
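Before turning to causality prediction for unseen verbs, the knowledge-driven (VC-Knowledge) grounding step evaluated in Tables 4 and 5 can be made concrete with a short sketch: the patient role is grounded to the object track whose causality detection vector d(γi) is most coherent, by cosine similarity, with the verb's causality knowledge vector c(v). The numeric vectors in the usage comments are hypothetical.

```python
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 0.0

def ground_patient(verb_causality, track_detections):
    """verb_causality: causality knowledge vector c(v) for the verb, condensed to
    the attributes that have visual detectors.
    track_detections: {track_id: binary causality detection vector d(gamma_i)}.
    Returns the track whose detections best cohere with the verb's causality."""
    return max(track_detections, key=lambda t: cosine(verb_causality, track_detections[t]))

# Hypothetical example for "cut" (high weight on NumberOfPieces and Size changes):
# c_cut = np.array([0.1, 0.9, 0.05, 0.05, 0.3, 0.7])
# detections = {"cucumber": np.array([0, 1, 0, 0, 1, 1]),
#               "cutting_board": np.array([0, 0, 0, 0, 0, 0])}
# print(ground_patient(c_cut, detections))  # -> "cucumber"
```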
In this section, we demonstrate that the existing causality knowledge for some seed verbs can be used to predict causality for new verbs of which we have no knowledge. We formulate the problem as follows. Suppose we have causality knowledge for a set of seed verbs as training data. Given a new verb, whose causality knowledge is not known, our goal is to predict the causality attributes associated with this new verb. Although the causality knowledge is unknown, it is easy to compute Distributional Semantic Models (DSM) for this verb. Then our goal is to find the causality vector c′ that maximizes arg max c′ p(c′|v), (3) where v is the DSM vector for the verb v. The usage of DSM vectors is based on our hypothesis that the textual context of a verb can reveal its possible causality information. For example, the contextual words “pieces” and “halves” may indicate the CoS attribute “NumberOfPieces” for the verb “cut”. We simplify the problem by assuming that the causality vector c′ takes binary values, and also assuming the independence between different causality attributes. Thus, we can formulate this task as a group of binary classification problems: predicting whether a particular causality attribute is positive or negative given the DSM vector of a verb. We apply logistic regression to train a separate classifier for each attribute. Specifically, for the features of a verb, we use the Distributional Memory (typeDM) (Baroni and Lenci, 2010) vector. The class label indicates whether the corresponding attribute is associated with the verb. In our experiment we chose six attributes to study: Attachment, NumberOfPieces, Presence, Visibility, Location, and Size. For each one of the eleven verbs in the grounding task, we predict its causality knowledge using classifiers trained on all other verbs (i.e., 177 verbs in training set). To evaluate the predicted causality vectors, we applied them in the knowledge-driven approach (PVC-Knowledge). Grounding results were compared with the same method using the causality knowledge collected via crowd-sourcing. Ta1821 ble 6 shows the grounding accuracy on the patient role for each verb. For most verbs, using the predicted knowledge achieves very similar performance compared to using the collected knowledge. The overall grounding accuracy of using the predicted knowledge on all four semantic roles is only 0.3% lower than using the collected knowledge. This result demonstrates that physical causality of action verbs, as part of verb semantics, can be learned through Distributional Semantics. 7 Conclusion This paper presents, to the best of our knowledge, the first attempt that explicitly models the physical causality of action verbs. We have applied causality modeling to the task of grounding semantic roles to the environment using two approaches: a knowledge-based approach and a learning-based approach. Our empirical evaluations have shown encouraging results for both approaches. When annotated data is available (in which semantic roles of verbs are grounded to physical objects), the learning-based approach, which learns the associations between verbs and causality detectors, achieves the best overall performance. On the other hand, the knowledge-based approach also achieves competitive performance (even better than previous learned models), without any training. 
The most exciting aspect about the knowledge-based approach is that causality knowledge for verbs can be acquired from humans (e.g., through crowd-sourcing) and generalized to novel verbs about which we have not yet acquired causality knowledge. In the future, we plan to build a resource for modeling physical causality for action verbs. As object recognition and tracking are undergoing significant advancements in the computer vision field, such a resource together with causality detectors can be immediately applied for any applications that require grounded language understanding. Acknowledgments This work was supported in part by the National Science Foundation (IIS-1208390 and IIS1617682) and DARPA under the SIMPLEX program through UCLA (N66001-15-C-4035). The authors would like to thank anonymous reviewers for valuable comments and suggestions. References Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pages 86–90. Association for Computational Linguistics. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Eduardo Blanco, Nuria Castell, and Dan I Moldovan. 2008. Causal relation extraction. In Proceedings of the Sixth International Language Resources and Evaluation (LREC’08). Robert MW Dixon and Alexandra Y Aikhenvald. 2006. Adjective Classes: A Cross-linguistic Typology. Explorations in Language and Space C. Oxford University Press. Alahoum Fathi and James M Rehg. 2013. Modeling actions through state changes. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2579–2586. IEEE. Richard E. Fikes and Nils J. Nilsson. 1971. Strips: A new approach to the application of theorem proving to problem solving. In Proceedings of the 2Nd International Joint Conference on Artificial Intelligence, IJCAI’71, pages 608–620, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Amy Fire and Song-Chun Zhu. 2015. Learning perceptual causality from video. ACM Transactions on Intelligent Systems and Technology (TIST), 7(2):23. Malik Ghallab, Dana Nau, and Paolo Traverso. 2004. Automated planning: theory & practice. Elsevier. Eugenia Goldvarg and Philip N Johnson-Laird. 2001. Naive causality: A mental model theory of causal meaning and reasoning. Cognitive science, 25(4):565–610. Malka Rappaport Hovav and Beth Levin. 2010. Reflections on Manner / Result Complementarity. Lexical Semantics, Syntax, and Event Structure, pages 21–38. Christopher Kennedy and Louise McNally. 2005. Scale structure and the semantic typology of gradable predicates. Language, 81(2)(0094263):345– 381. Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC2002). Hema Swetha Koppula, Rudhir Gupta, and Ashutosh Saxena. 2013. Learning human activities and object affordances from rgb-d videos. The International Journal of Robotics Research, 32(8):951–970. 1822 Beth Levin and Malka Rappaport Hovav. 2010. Lexicalized scales and verbs of scalar change. In 46th Annual Meeting of the Chicago Linguistics Society. Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago press. Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated human-robot dialogue. 
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI15), pages 2288–2294, Austin, TX. Changsong Liu, Lanbo She, Rui Fang, and Joyce Y. Chai. 2014. Probabilistic labeling for efficient referential grounding based on collaborative discourse. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 13–18, Baltimore, MD. Anton Milan, Stefan Roth, and Kaspar Schindler. 2014. Continuous energy minimization for multitarget tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(1):58–72. Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 992– 1002, Beijing, China, July. Association for Computational Linguistics. Rutu Mulkar-Mehta, Christopher Welty, Jerry R Hoobs, and Eduard Hovy. 2011. Using granularity concepts for discovering causal relations. In Proceedings of the FLAIRS conference. Iftekhar Naim, Young C. Song, Qiguang Liu, Liang Huang, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2015. Discriminative unsupervised alignment of natural language instructions with corresponding video segments. In Proceedings of NAACL HLT 2015, pages 164–174, Denver, Colorado, May–June. Association for Computational Linguistics. Ad Neeleman, Hans Van de Koot, et al. 2012. The linguistic expression of causation. The Theta System: Argument Structure at the Interface, page 20. J Pustejovsky. 1991. The syntax of event structure. Cognition, 41(1-3):47–81. Kira Radinsky, Sagie Davidovich, and Shaul Markovitch. 2012. Learning causality for news events prediction. In Proceedings of the 21st international conference on World Wide Web, pages 909–918. ACM. Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics (TACL), 1:25–36. Mehwish Riaz and Roxana Girju. 2014. In-depth exploitation of noun and verb semantics to identify causation in verb-noun pairs. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial), page 161. Anna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Manfred Pinkal, and Bernt Schiele. 2014. Coherent multi-sentence video description with variable level of detail. In Pattern Recognition, pages 184–195. Springer. S. Russell and P. Norvig. 2010. Artificial Intelligence: A Modern Approach. Prentice Hall. Karin Kipper Schuler. 2005. VerbNet: A BroadCoverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania. Lanbo She and Joyce Y. Chai. 2016. Incremental acquisition of verb hypothesis space towards physical world interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Lanbo She, Yu Cheng, Joyce Chai, Yunyi Jia, Shaohua Yang, and Ning Xi. 2014a. Teaching robots new actions through natural language instructions. In ROMAN, 2014 IEEE, Edinburgh, UK, August. Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Chai, and Ning Xi. 2014b. Back to the blocks world: Learning new actions through situated human-robot dialogue. In Proceedings of the SIGDIAL 2014 Conference, Philadelphia, US, June. Grace Song and Phillip Wolff. 2003. 
Linking perceptual properties to the linguistic expression of causation. Language, culture and mind, pages 237–250. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth J Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Association for the Advancement of Artificial Intelligence (AAAI). Matthew R Walter, Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. 2013. Learning semantic maps from natural language descriptions. In Robotics: Science and Systems. Phillip Wolff and Grace Song. 2003. Models of causation and the semantics of causal verbs. Cognitive Psychology, 47(3):276–332. Phillip Wolff. 2003. Direct causation in the linguistic coding and individuation of causal events. Cognition, 88(1):1–48. Yezhou Yang, Cornelia Fermuller, and Yiannis Aloimonos. 2013. Detection of manipulation action consequences (mac). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2563–2570. 1823 Yezhou Yang, Anupam Guha, C Fermuller, and Yiannis Aloimonos. 2014. A cognitive system for understanding human manipulation actions. Advances in Cognitive Sysytems, 3:67–86. Shaohua Yang, Qiaozi Gao, Changsong Liu, Caiming Xiong, Song-Chun Zhu, and Joyce Y. Chai. 2016. Grounded semantic role labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, San Diego, CA. Haonan Yu and Jeffrey Mark Siskind. 2013. Grounded language learning from video described with sentences. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 53–63. 1824
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1825–1836, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Optimizing an Approximation of ROUGE – a Problem-Reduction Approach to Extractive Multi-Document Summarization Maxime Peyrard and Judith Eckle-Kohler Research Training Group AIPHES and UKP Lab Computer Science Department, Technische Universit¨at Darmstadt www.aiphes.tu-darmstadt.de, www.ukp.tu-darmstadt.de Abstract This paper presents a problem-reduction approach to extractive multi-document summarization: we propose a reduction to the problem of scoring individual sentences with their ROUGE scores based on supervised learning. For the summarization, we solve an optimization problem where the ROUGE score of the selected summary sentences is maximized. To this end, we derive an approximation of the ROUGE-N score of a set of sentences, and define a principled discrete optimization problem for sentence selection. Mathematical and empirical evidence suggests that the sentence selection step is solved almost exactly, thus reducing the problem to the sentence scoring task. We perform a detailed experimental evaluation on two DUC datasets to demonstrate the validity of our approach. 1 Introduction Multi-document summarization (MDS) is the task of constructing a summary from a topically related document collection. This paper focuses on the variant of extractive and generic MDS, which has been studied in detail for the news domain using available benchmark datasets from the Document Understanding Conference (DUC) (Over et al., 2007). Extractive MDS can be cast as a budgeted subset selection problem (McDonald, 2007; Lin and Bilmes, 2011) where the document collection is considered as a set of sentences and the task is to select a subset of the sentences under a length constraint. State-of-the-art and recent works in extractive MDS solve this discrete optimization problem using integer linear programming (ILP) or submodular function maximization (Gillick and Favre, 2009; Mogren et al., 2015; Li et al., 2013b; Kulesza and Taskar, 2012; Hong and Nenkova, 2014). The objective function that is maximized in the optimization step varies considerably in previous work. For instance, Yih et al. (2007) maximize the number of informative words, Gillick and Favre (2009) the coverage of particular concepts, and others maximize a notion of “summary worthiness”, while minimizing summary redundancy (Lin and Bilmes, 2011; K˚ageb¨ack et al., 2014). There are also multiple approaches which maximize the evaluation metric for system summaries itself based on supervised Machine Learning (ML). System summaries are commonly evaluated using ROUGE (Lin, 2004), a recall oriented metric that measures the n-gram overlap between a system summary and a set of human-written reference summaries. The benchmark datasets for MDS can be employed in two different ways for supervised learning of ROUGE scores: either by training a model that assigns ROUGE scores to individual textual units (e.g., sentences), or by performing structured output learning and directly maximizing the ROUGE scores of the created summaries (Nishikawa et al., 2014; Takamura and Okumura, 2010; Sipos et al., 2012). The latter approach suffers both from the limited amount of training data and from the higher complexity of the machine learning models. 
In contrast, supervised learning of ROUGE scores for individual sentences can be performed with simple regression models using hundreds of sentences as training instances, taken from a single pair of documents and reference summaries. Extractive MDS can leverage the ROUGE scores of individual sentences in various ways, in particular, as part of an optimization step. In our work, we follow the previously successful approaches to 1825 extractive MDS using discrete optimization, and make the following contributions: We provide a theoretical justification and empirical validation for using ROUGE scores of individual sentences as an optimization objective. Assuming that ROUGE scores of individual sentences have been estimated by a supervised learner, we derive an approximation of the ROUGE-N score for a set of sentences from the ROUGE-N scores of the individual sentences in the general case of N >= 1. We use our approximation to define a mathematically principled discrete optimization problem for sentence selection. We empirically evaluate our framework on two DUC datasets, demonstrating the validity of our approximation, as well as its ability to achieve competitive ROUGE scores in comparison to several strong baselines. Most importantly, the resulting framework reduces the MDS task to the problem of scoring individual sentences with their ROUGE scores. The overall summarization task is converted to two sequential tasks: (i) scoring single sentences, and (ii) selecting summary sentences by solving an optimization problem where the ROUGE score of the selected sentences is maximized. The optimization objective we propose almost exactly solves (ii), which we justify by providing both mathematical and empirical evidence. Hence, solving the whole problem of MDS is reduced to solving (i). The rest of this paper is structured as follows: in Section 2, we discuss related work. Section 3 presents our subset selection framework consisting of an approximation of the ROUGE score of a set of sentences, and a mathematically principled discrete optimization problem for sentence selection. We evaluate our framework in Section 4 and discuss the results in Section 5. Section 6 concludes. 2 Related Work Related to our approach is previous work in extractive MDS that (i) casts the summarization problem as budgeted subset selection, and (ii) employs supervised learning on MDS datasets to learn a scoring function for textual units. Budgeted Subset Selection Extractive MDS can be formulated as the problem of selecting a subset of textual units from a document collection such that the overall score of the created summary is maximal and a given length constraint is observed. The selection of textual units for the summary relies on their individual scores, assigned by a scoring function which represents aspects of their relevance for a summary. Often, sentences are considered as textual units. Simultaneously maximizing the relevance scores of the selected units and minimizing their pairwise redundancy given a length constraint is a global inference problem which can be solved using ILP (McDonald, 2007). Several state-of-the-art results in MDS have been obtained by using ILP to maximize the number of relevant concepts in the created summary while minimizing the pairwise similarity between the selected sentences (Gillick and Favre, 2009; Boudin et al., 2015; Woodsend and Lapata, 2012). Another way to formulate the problem of finding the best subset of textual units is to maximize a submodular function. 
Maximizing submodular functions is a general technique that uses a greedy optimization algorithm with a mathematical guarantee on optimality (Nemhauser and Wolsey, 1978). Performing summarization in the framework of submodularity is natural because summaries try to maximize the coverage of relevant units while minimizing redundancy (Lin and Bilmes, 2011). However, several different coverage and redundancy functions have been proposed (Lin and Bilmes, 2011; K˚ageb¨ack et al., 2014; Yin and Pei, 2015) recently, and there is not yet a clear consensus on which coverage function to maximize. Supervised Learning Supervised learning using datasets with reference summaries has already been employed in early work on summarization to classify sentences as summary-worthy or not (Kupiec et al., 1995; Aone et al., 1995). Learning a scoring function for various kinds of textual units has become especially popular in the context of global optimization: scores of textual units, learned from data, are fed into an ILP problem solver to find the subset of sentences with maximal overall score. For example, Yih et al. (2007) score each word in the document cluster based on frequency and position, Li et al. (2013b) learn bigram frequency in the reference summaries, and Hong and Nenkova (2014) learn word importance from a rich set of features. Closely related to our work are summarization approaches that include a supervised component 1826 which assigns ROUGE scores to individual sentences. For example, Ng et al. (2012), Li et al. (2013a) and Li et al. (2015) all use a regression model to learn ROUGE-2 scores for individual sentences, but use it in different ways for the summarization. While Ng et al. (2012) use the ROUGE scores of sentences in combination with the Maximal Marginal Relevance algorithm as a baseline approach, Li et al. (2013a) use the scores to select the top-ranked sentences for sentence compression and subsequent summarization. Li et al. (2015), in contrast, use the ROUGE scores to re-rank a set of sentences that are output by an optimization step. While learning ROUGE scores of textual units is widely used in summarization systems, the theoretical background on why this is useful has not been well studied yet. In our work, we present the mathematical and empirical justification for this common practice. In the next section, we start with the mathematical justification. 3 Content Selection Framework 3.1 Approximation of ROUGE-N Notation: Let S = {si|i ≤m} be a set of m sentences which constitute a system summary. We use ρN(S) or simply ρ(S) to denote the ROUGEN score of S. ROUGE-N evaluates the n-gram overlap between S and a set of reference summaries (Lin, 2004). Let S∗denote the reference summary and RN the number of n-gram tokens in S∗. RN is a function of the summary length in words, in particular, R1 is the target size of the summary in words. Finally, let FS(g) denote the number of times the n-gram type g occurs in S. For a single reference summary, ROUGE-N is computed as follows: ρ(S) = 1 RN X g∈S∗ min(FS(g), FS∗(g)) (1) For compactness, we use the following notation for any set of sentences X: CX,S∗(g) = min(FX(g), FS∗(g)) (2) CX,S∗(g) can be understood as the contribution of the n-gram g. 
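Equation (1) is straightforward to compute directly. The following minimal sketch (Python) assumes whitespace-tokenized text and a single reference summary; the function and variable names are illustrative, not taken from the paper's implementation:

from collections import Counter

def ngram_counts(tokens, n):
    # F_X(g): how many times each n-gram type g occurs in the token sequence X
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(summary_tokens, reference_tokens, n=2):
    # Eq. (1): (1 / R_N) * sum over reference n-grams g of min(F_S(g), F_S*(g))
    f_s = ngram_counts(summary_tokens, n)
    f_ref = ngram_counts(reference_tokens, n)
    r_n = sum(f_ref.values())  # R_N: number of n-gram tokens in the reference
    return sum(min(f_s[g], c) for g, c in f_ref.items()) / r_n if r_n else 0.0

The derivations that follow use this single-reference form.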
ROUGE-N for a Pair of Sentences: Using this notation, the ROUGE-N score of a set of two sentences a and b can be written as: ρ(a ∪b) = 1 RN X g∈S∗ min(Ca∪b,S∗(g), FS∗(g)) (3) We observe that ρ(a ∪b) can be expressed as a function of the individual scores ρ(a) and ρ(b): ρ(a ∪b) = ρ(a) + ρ(b) −ϵ(a ∩b) (4) where ϵ(a ∩b) is an error correction term that discards overcounted n-grams from the sum of ρ(a) and ρ(b): ϵ(a ∩b) = 1 RN X g∈S∗ max(Ca,S∗(g)+Cb,S∗(g)−FS∗(g), 0) (5) A proof that this error correction is correct is given in appendix A.1. General Formulation of ROUGE-N: We can extend the previous formulation of ρ to sets of arbitrary cardinality using recursion. If ρ(S) is given for a set of sentences S, and a is a sentence then: ρ(S ∪a) = ρ(S) + ρ(a) −ϵ(S ∩a) (6) We prove in appendix A.1 that this formula is the ROUGE-N score of S ∪a. Another way to obtain ρ for an arbitrary set S is to adapt the principle of inclusion-exclusion: ρ(S) = m X i=1 ρ(si)+ m X k=2 (−1)k+1( X 1≤i1≤···≤ik≤m ϵ(k)(si1 ∩· · ·∩sik)) (7) This formula can be understood as adding up scores of individual sentences, but n-grams appearing in the intersection of two sentences might be overcounted. ϵ(2) is used to account for these n-grams. But now, n-grams in the intersection of three sentences might be undercounted and ϵ(3) is used to correct this. Each ϵ(k) contributes to improving the accuracy by refining the errors made by ϵ(k−1) for the n-grams appearing in the intersection of k sentences. When k = |S|, ρ(S) is exactly the ROUGE-N of S. A rigorous proof and details about ϵ(k) are provided in appendix A.2. 1827 Approximation of ROUGE-N for a Pair of Sentences: To find a valid approximation of ρ as defined in (7), we first consider the ρ(a ∪b) from equation (3) and then extend it to the general case. When maximizing ρ, scores for sentences are assumed to be given (e.g., estimated by a ML component). We still need to estimate ϵ(a ∩b), which means, according to (5), to estimate: X g∈S∗ max(Ca,S∗(g) + Cb,S∗(g) −FS∗(g), 0) (8) At inference time, neither S∗(the reference summary) nor FS∗(number of occurrences of n-grams in the reference summary) is known. At this point, we can observe that, similar as for sentence scoring, ϵ can be estimated via a supervised ML component. Such an ML model can easily be trained on the intersections of all sentence pairs in a given training dataset. Hence, we can assume that both the scores for individual sentences and the ϵ are learned empirically from data using ML. As a result, we have pushed all estimation steps into supervised ML components, which leaves the subset selection step fully principled. However, we found in our experiments that even a simple heuristic yields a decent approximation of ϵ. The heuristic uses the frequency freq(g) of an n-gram g observed in the source documents: X g∈S∗ max(Ca,S∗(g) + Cb,S∗(g) −FS∗(g), 0) ≈ X g∈a∩b 1[freq(g) ≥α] (9) The threshold α tells us which n-grams are likely to appear in the reference summary, and it is determined by grid-search on the training set. This is penalizing n-grams which appear twice and are likely to occur in the summary. It can be understood as a way of limiting redundancy. In practice, we used α = 0.3. However, we experimented with various values of the hyper-parameter α and found that its value has no significant impact as long as it is fairly small (< 0.5). Higher values will ignore too many redundant n-grams and the summary will have a high redundancy. RN is known since it is simply the number of n-gram tokens in the summaries. 
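On training data, where the reference summary is available, the exact correction term in (5) can be computed directly; this is also what an epsilon-regressor of the kind just described would be trained to predict. A minimal sketch (Python), reusing ngram_counts and rouge_n from the earlier sketch (again, illustrative names only):

def epsilon_pair(a_tokens, b_tokens, reference_tokens, n=2):
    # Eq. (5): mass of n-grams whose clipped counts in a and b together exceed F_S*(g)
    f_a = ngram_counts(a_tokens, n)
    f_b = ngram_counts(b_tokens, n)
    f_ref = ngram_counts(reference_tokens, n)
    r_n = sum(f_ref.values())
    over = sum(max(min(f_a[g], c) + min(f_b[g], c) - c, 0) for g, c in f_ref.items())
    return over / r_n if r_n else 0.0

def rouge_pair(a_tokens, b_tokens, reference_tokens, n=2):
    # Eq. (4): rho(a u b) = rho(a) + rho(b) - epsilon(a n b), with n-grams counted per sentence
    return (rouge_n(a_tokens, reference_tokens, n)
            + rouge_n(b_tokens, reference_tokens, n)
            - epsilon_pair(a_tokens, b_tokens, reference_tokens, n))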
We end up with the following approximation for the pairwise case: ˜ρ(a ∪b) = ρ(a) + ρ(b) −˜ϵ(a ∪b), where ˜ϵ(a ∪b) = 1 RN X g∈a∩b 1[freq(g) ≥α] (10) General Approximation of ROUGE-N: Now, we can approximate ρ(S) for the general case defined by equation (7). We recall that ρ(S) contains the sum of ρ(si), the pairwise error terms ϵ(2)(si∩sj), the error terms of three sentences ϵ(3) and so on. We can restrict ourselves to the individual sentences and the pairwise error corrections. Indeed, the intersection between more than two sentences is often empty, and accounting for it does not improve the accuracy significantly, but greatly increases the computational cost. A formulation of ϵ in the case of two sentences has already been defined in (10). Thus, we have an approximation of the ROUGE-N function for any set of sentences that can be computed at inference time: ˜ρ(S) = n X i=1 ρ(si) − X a,b∈S,a̸=b ˜ϵ(a ∩b) (11) We empirically checked the validity of this approximation. For this, we sampled 1000 sets of sentences from source documents of DUC-2003 (sets of 2 to 5 sentences) and compared their ˜ρ score to the real ROUGE-N. We observe a pearson’s r correlation ≥0.97, which validates ˜ρ. 3.2 Discrete Optimization ˜ρ from equation (11) defines a set function that scores a set of sentences. The task of summarization is now to select the set S∗with maximal ˜ρ(S∗) under a length constraint. Submodularity: A submodular function is a set function obeying the diminishing returns property: ∀S ⊆T and a sentence a: F(S ∪a) −F(S) ≥ F(T ∪a) −F(T). Submodular functions are convenient because maximization under constraints can be done greedily with a guarantee of the optimality of the solution (Nemhauser et al., 1978). It has been shown that ROUGE-N is submodular (Lin and Bilmes, 2011) and it is easy to verify that ˜ρ is submodular as well (the proof is given in the supplemental material). We can therefore apply the greedy maximization algorithm to find a good set of sentences. This has the advantage of being straightforward and fast, however it does not necessarily find the optimal solution. ILP: A common way to solve a discrete optimization problem is to formulate it as an ILP. It 1828 maximizes (or minimizes) a linear objective function with some linear constraints where the variables are integers. ILP has been well studied and existing tools can efficiently retrieve the exact solution of an ILP problem. We observe that it is possible to formulate the maximization of ˜ρ(S) as an ILP. Let x be the binary vector whose i-th entry indicates whether sentence i is in the summary or not, ˜ρ(si) the scores of sentences, and K the length constraint. We pre-compute the symmetric matrix ˜P where ˜Pi,j = ˜ϵ(si ∩sj) and solve the following ILP: max( nP i=1 xi ∗˜ρ(si) −d 1 R P i≥j αi,j ∗˜Pi, j) Pn i=1 xi ∗len(si) ≤K ∀(i, j), αi,j −xi ≤0 ∀(i, j), αi,j −xj ≤0 ∀(i, j), xi + xj −αi,j ≤1 d is a damping factor that allows to account for approximation errors. When d = 0, the problem becomes the maximization of “summary worthiness” under a length constraint, with “summary worthiness” being defined by ρ(si). In practice, we used a value d = 0.9 because we observed that the learner tends to slightly overestimate the ROUGE-N scores of sentences. The mathematical derivation implies d = 1, however we can easily adjust for shifts in average scores of sentences from the estimation step by adjusting d. Another option would be to post-process the scores after the estimation step to fix the average and let d = 1 in the optimization step. 
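For concreteness, the selection step can be spelled out with an off-the-shelf ILP library. The sketch below uses PuLP with its default CBC backend, which is our choice of solver rather than necessarily the authors', and our own argument names: scores[i] stands for the estimated score of sentence i, pair_penalty[i][j] plays the role of the precomputed pairwise term, and d defaults to 0.9 as in the text. The role of the damping factor d is taken up again immediately after the sketch.

from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def select_sentences(scores, pair_penalty, lengths, budget, r_n, d=0.9):
    # Maximize sum_i x_i * scores[i] - d * (1/R_N) * sum_{i>j} a_ij * pair_penalty[i][j]
    # subject to the summary length budget and the linearization constraints on a_ij.
    n = len(scores)
    prob = LpProblem("rouge_selection", LpMaximize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(n)]
    a = {(i, j): LpVariable(f"a_{i}_{j}", cat=LpBinary)
         for i in range(n) for j in range(i)}
    prob += (lpSum(scores[i] * x[i] for i in range(n))
             - d * (1.0 / r_n) * lpSum(pair_penalty[i][j] * a[i, j] for (i, j) in a))
    prob += lpSum(lengths[i] * x[i] for i in range(n)) <= budget
    for (i, j) in a:
        prob += a[i, j] <= x[i]
        prob += a[i, j] <= x[j]
        prob += x[i] + x[j] - a[i, j] <= 1
    prob.solve()
    return [i for i in range(n) if x[i].value() and x[i].value() > 0.5]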
Indeed, if d moves away from 1, we move away from the mathematical framework of ROUGE-N maximization. If d ̸= 0, it seems intuitive to interpret the second term as minimizing the summary redundancy, which is in accordance to previous works. However, in our framework, this term has a precise interpretation: it maximizes ROUGE-N scores up to the second order of precision, and the ROUGE-N formula itself already induces a notion of “summary worthiness” and redundancy, which we can empirically infer from data via supervised ML for sentence scoring, and a simple heuristic for sentence intersections. 4 Evaluation We perform three kinds of experiments in order to empirically evaluate our framework: first, we show that our proposed approximation is valid, then we analyze a basic supervised sentence scoring component, and finally we perform an extrinsic evaluation on end-to-end extractive MDS. In our experiments, we use the DUC datasets from 2002 and 2003 (DUC-02 and DUC-03). We use the variants of ROUGE identified by Owczarzak et al. (2012) as strongly correlating with human evaluation methods: ROUGE-2 recall with stemming and stopwords not removed (giving the best agreement with human evaluation), and ROUGE-1 recall (as the measure with the highest ability to identify the better summary in a pair of system summaries). For DUC-03, summaries are truncated to 100 words, and to 200 words for DUC-02. 1 The truncation is done automatically by ROUGE. 2 4.1 Framework Validity Given that sentences receive scores close to their individual ROUGE-N, we presented a function that approximates the ROUGE-N of sets of these sentences and proposed an optimization to find the best scoring set under a length constraint. To validate our framework empirically, we consider its upper-bound, which is obtained when our ILP/submodular optimizations use the real ROUGE-N scores of the individual sentences, calculated based on the reference summaries. We compare this upper bound to a greedy approach, which simply adds the best scoring sentences one by one to the subset until the length limit is reached, and to the real upper bound for extractive summarization which is determined by solving a maximum coverage problem for n-grams from the reference summary (as it was done by Takamura and Okumura (2010)). Table 1 shows the results. We observe that ILPR produces scores close to the reference, thus reducing the problem of extractive summarization to the task of sentence scoring, because the perfect scores induced near perfect extracted summaries in this framework. SBL-R seems less promising than ILP-R because it greedily maximizes a function which ILP-R exactly maximizes. Therefore, we continue our experiments in the following sec1In the official DUC-03 competitions, summaries of length 665 bytes were expected. Systems could produce different numbers of words. The variation in length has a noticeable impact on ROUGE recall scores. 2ROUGE-1.5.5 with the parameters: -n 2 -m -a -l 100 -x -c 95 -r 1000 -f A -p 0.5 -t 0. The length parameter becomes -l 200 for DUC-02. 1829 tions with ILP-R only. However, SBL-R offers a nice trade-off between performance and computation cost. The greedy optimization of SBL-R is noticeably faster than ILP-R. DUC-02 DUC-03 R1 R2 R1 R2 Greedy 0.597 0.414 0.391 0.148 SBL-R 0.630 0.484 0.424 0.160 ILP-R 0.644 0.495 0.447 0.178 Upper Bound 0.648 0.497 0.452 0.181 Table 1: Upper bound of our framework compared to extractive upper bound. In practice, the learner will not produce perfect scores. 
We experimentally validated that with learned scores converging to true scores, the extracted summary converges to the best extractive summary (w.r.t to ROUGE-N). To this end, we simulated approximate learners by artificially randomizing the true scores to end up with lists having various correlations with the true scores. We fed these scores to ILP-R and computed the ROUGE-1 of the generated summaries for an example topic from DUC-2003. Figure 1 displays the expected ROUGE-1 versus the performance of the artificial learner (correlation with true scores of sentences). We observe that, as the learner improves, the generated summaries approach the best ROUGE scoring summary. Figure 1: ROUGE-1 of summary against sentence scores correlation with true ROUGE-1 scores of sentences (d30003t from DUC-2003). 4.2 Sentence Scoring Now we look at the supervised learning component which learns ROUGE-N scores for individual sentences. We know that we can achieve an overall summary ROUGE-N score close to the upper bound, if a learner would be able to learn the scores perfectly. For better understanding the difficulty of the task of sentence scoring, we look at the correlation of the scores produced by a basic learner and the true scores given in a reference dataset. Model and Features From an existing summarization dataset (e.g. a DUC dataset), a training set can straightforwardly be extracted by annotating each sentence in the source documents with its ROUGE-N score. For each topic in the dataset, this yields a list of sentences and their target score. To support the claim that learning ROUGE scores for individual sentences is easier than solving the whole summarization task, it is sufficient to choose a basic learner with simple features and little in-domain training data (models are trained on one DUC dataset and evaluated on another). Specifically, we employ a support vector regression (SVR).3 We use only classical surface-level features to represent sentences (position, length, overlap with title) and combine them with frequency features. The latter include TF*IDF weighting of the terms (similar to Luhn (1958)), the sum of the frequency of the bi-grams in the sentence, as well as the sum of the document frequency (number of source documents in which the n-grams appear) of the terms and bi-grams in a sentence. We trained two models, R1 and R2 on DUC02 and DUC-03. For R1, the target score is the ROUGE-1 recall, while R2 learns ROUGE-2 recall. Correlation Analysis We evaluated our sentence scoring models R1 and R2 by calculating the correlation of the scores produced by R1 and R2 and the true scores given in the DUC-03 data. We compare both models to the true ROUGE-1 and ROUGE-2 scores. In addition, we calculated the correlation of the TF*IDF and LexRank scores, in order to understand how well they would fit into our framework (TF*IDF and LexRank are described in section 4.3). The results are displayed in Table 2. Even with a basic learner it is possible to learn scores that correlate well with the true ROUGE-N scores, which supports the claim that it is easier to learn scores for individual sentences than to solve the whole problem of summarization. This finding strongly supports our proposed reduction of the extractive MDS problem to the task of learning 3We use the implementation in scikit-learn (Pedregosa et al., 2011). 
1830 with ROUGE-1 with ROUGE-2 Pearson’s r Kendall’s tau nDCG@15 Pearson’s r Kendall’s tau nDCG@15 TF*IDF 0.923 0.788 0.916 0.607 0.512 0.580 LexRank 0.210 0.120 0.534 0.286 0.178 0.379 model R1 0.940 0.813 0.951 0.653 0.545 0.693 model R2 0.729 0.496 0.891 0.743 0.576 0.752 Table 2: Correlation of different kinds of sentence scores and their true ROUGE-1 and ROUGE-2 scores. scores for individual sentences, which correlate well with their true ROUGE-N scores. We observe that TF*IDF correlates surprisingly well with the ROUGE-1 score, which indicates that we can expect a significant performance gain when feeding TF*IDF scores to our optimization framework. LexRank, on the other hand, orders sentences according to their centrality and does not look at individual sentences. Accordingly, we observe a low correlation with the true ROUGEN scores, and thus LexRank may not benefit from the optimization (which we confirmed in our experiments). Finally, we observe that there is significant room for improvement regarding ROUGE-2, as well as for Kendall’s tau in ROUGE-1 where a more sophisticated learner could produce scores that correlate better with the true scores. The higher the correlation of the sentence scores assigned by a learner and the true scores, the better the summary produced by the subsequent subset selection. 4.3 End-to-End Evaluation In our end-to-end evaluation on extractive MDS, we use the following baselines for comparison: • TF*IDF weighting: This simple heuristic was introduced by Luhn (1958). Each sentence receives a score from the TF*IDF of its terms. We trained IDFs (Inverse Document Frequencies) on a background corpus 4 to improve the original algorithm. • LexRank: Among other graph-based approaches to summarization (Mani and Bloedorn, 1997; Radev et al., 2000; Mihalcea, 2004), LexRank (Erkan and Radev, 2004) has become the most popular one. A similarity graph G(V, E) is constructed where V is the set of sentences and an edge eij is drawn between sentences vi and vj if and only if 4We used DBpedia long abstract: http://wiki.dbpedia.org/Downloads2015-04. the cosine similarity between them is above a given threshold. Sentences are scored according to their PageRank score in G. For our experiments, we use the implementation available in the sumy package.5 • ICSI: ICSI is a recent system that has been identified as one of the state-of-the-art systems by Hong et al. (2014). It is a global linear optimization framework that extracts a summary by solving a maximum coverage problem considering the most important concepts in the source documents. Concepts are identified as bi-grams and their importance is estimated via their frequency in the source documents. Boudin et al. (2015) released a Python implementation (ICSI sume) that we use in our experiments. • SFOUR: SFOUR is a structured prediction approach that trains an end-to-end system with a large-margin method to optimize a convex relaxation of ROUGE (Sipos et al., 2012). We use the publicly available implementation. 6 As described in the previous section, two models are trained: R1 and R2. We evaluate both of them in the end-to-end setup with and without our optimization. In the greedy version, sentences are added as long as the summary length is valid. We apply the optimization for sentence scoring models trained on ROUGE-1 and ROUGE-2 as well. The scoring models are trained on one dataset and evaluated on the other. For the ILP optimization, the damping factor can vary and leads to different performance. 
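The R1/R2 scorers referred to here are plain regressors over surface features. A minimal sketch with scikit-learn, using a deliberately reduced feature set (the paper additionally uses bigram-frequency and document-frequency features) and our own function and variable names:

import numpy as np
from sklearn.svm import SVR

def sentence_features(tokens, position, title_tokens, tfidf):
    # Simplified surface features: position, length, title overlap, summed TF*IDF weight
    return [position,
            len(tokens),
            len(set(tokens) & set(title_tokens)),
            sum(tfidf.get(w, 0.0) for w in tokens)]

# X: one feature row per source-document sentence; y: that sentence's ROUGE-1 (or ROUGE-2)
# recall against the topic's reference summaries, e.g. computed with rouge_n above.
# model = SVR().fit(np.asarray(X), np.asarray(y))
# scores = model.predict(np.asarray(X_new))   # these are the scores fed to the optimization step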
We report the best results among few variations. In order to speed-up the ILP step, we propose to limit the search space by only looking at the top K sentences7 (hence 5https://github.com/miso-belica/sumy 6http://www.cs.cornell.edu/˜rs/sfour/ 7We used K=50 and observed that a range from K=25 to K=70 yields a good trade-off between computation cost and performance. 1831 the importance of learning a correct ordering as well, like Kendall’s tau). This results in a massive speed-up and can even lead to better results as it prunes parts of the noise. Finally, we perform significance testing with the t-test to compare differences between two means.8 DUC-02 DUC-03 R1 R2 R1 R2 TFIDF 0.403 0.120 0.322 0.066 LexRank 0.446 0.158 0.354 0.077 ICSI 0.445 0.155 0.375 0.094 SFOUR 0.442 0.181 0.365 0.087 Greedy-R1 0.480 0.115 0.353 0.084 Greedy-R2 0.499 0.132 0.369 0.093 TFIDF+ILP 0.415 0.135 0.335 0.075 R1+ILP 0.509 0.187 0.378 0.101 R2+ILP 0.516* 0.192* 0.379 0.102 Table 3: Impact of the optimization step on sentence subset selection. Results Table 3 shows the results. The proposed optimization significantly and systematically improves TF*IDF performance as we expected from our analysis in the previous section. This result suggests that using only a frequency signal in source documents is enough to get high scoring summaries, which supports the common belief that frequency is one of the most useful features for generic news summarization. It also aligns well with the strong performance of ICSI, which combines an ILP step with frequency information as well. The optimization also significantly and systematically improves upon the greedy approach combined with our scoring models. Combining a SVR learner (SVR-1 and SVR-2) and our ILP-R produces results on par with ICSI and sometimes significantly better. SFOUR maximizes ROUGE in an end-to-end fashion, but is outperformed by our framework when using the same training data. The framework is able to reach a competitive performance even with a basic learner. These results again suggest that investigating better learners for sentence scoring might be promising in order to improve the quality of the summaries. We observe that the model trained on ROUGE2 is performing better than the model trained on ROUGE-1, although learning the ROUGE-2 scores seems to be harder than learning ROUGE-1 8The symbol * indicates that the difference compared to the previous best baseline is significant with p ≤0.05. scores (as shown in table 2). However, errors and approximations propagate less easily in ROUGE2, because the number of bi-grams in the intersection of two given sentences is far less. Hence we conclude that learning ROUGE-2 scores should be put into the focus of future work on improving sentence scoring. 5 Discussion This section discusses our contributions in a broader context. ROUGE Our subset selection framework performs the task of content selection, selecting an unordered set of textual units (sentences for now) for a system summary. The re-ordering of the sentences is left to a subsequent processing step, which accounts for aspects of discourse coherence and readability. While we justified our choice of ROUGE-1 recall and ROUGE-2 recall as optimization objectives by their strong correlation with human evaluation methods, ROUGE-N has also various drawbacks. In particular, it does not take into account the overall discourse coherence of a system summary (see the supplemental material for examples of summaries generated by our framework). 
From a broader perspective, systems that have high ROUGE scores can only be as good as ROUGE is, as a proxy for summary quality. However, as long as systems are evaluated with ROUGE, a natural approach is to develop systems that maximize it. Should novel automatic evaluation metrics be developed, our approach can still be applied, provided that the new metrics can be expressed as a function of the scores of individual sentences. Structured Learning Compared to MDS approaches using structured learning, our problemreduction has the important advantage that it considerably scales-up the available training data by working on sentences instead of documents/summaries pairs. Moreover, the task of sentence scoring is not dependent on arbitrary parameters such as the summary length which are inherently abstracted from the “summary worthiness” of individual textual units. Error Propagation The first step of the framework is left to a ML component which can only produce approximate scores. Empirical results (in Figure 1 and Table 2) suggest that even with an 1832 imperfect first step, the subsequent optimization is able to produce high scoring summaries. However, it might be insightful to study rigorously and in greater detail the propagation of errors induced by the first step. Other Metrics This work focused on maximizing ROUGE-N recall because it is a widely acknowledged automatic evaluation metric. ROUGE-N relies on reference summaries which forces us to perform an estimation step. In our framework, we use ML to estimate the individual scores of sentences without using reference summaries. However, Louis and Nenkova (2013) proposed several alternative evaluation metrics for system summaries which do not need reference summaries. They are based on the properties of the system summary and the source documents alone, and correlate well with human evaluation. Some of them can even reach a correlation with human evaluation similar to the ROUGE-2 recall. An example of such a metric is the JensenShannon Divergence (JSD) which is a symmetric smoothed version of the Kullback-Leibler divergence. Maximizing JSD can not be solved exactly with an ILP because it can not be factorized into individual sentences. However, applying an efficient greedy algorithm or maximizing a factorizable relaxation might produce strong results as well (for example, a simple greedy maximization of Kullback-Leibler divergence already yields good results (Haghighi and Vanderwende, 2009)). Future Work In this work, we developed a principled subset selection framework and empirically justified it. We focused on solving the second step of the framework while keeping the machine learning component as simple as possible. Essentially, our framework performs a modularization of the task of MDS, where all characteristics of the data and feature representations are pushed into a separate machine learning module – they should not affect the subsequent optimization step which remains fixed. The promising results we obtained for summarization with a basic learner (see Section 4.3) encourage future work on plugging in more sophisticated supervised learners in our framework. For example, we plan to incorporate lexicalsemantic information in the feature representation and leverage large-scale unsupervised pretraining. This direction is particularly promising because we have shown that we can expect significant performance gains for end-to-end MDS as the sentence scoring component improves. 
6 Conclusion We proposed a problem-reduction approach to extractive MDS, which performs a reduction to the problem of scoring individual sentences with their ROUGE scores based on supervised learning. We defined a principled discrete optimization problem for sentence selection which relies on an approximation of ROUGE. We empirically checked the validity of the approach on standard datasets and observed that even with a basic learner the framework produces promising results. The code for our optimizers is available at github.com/ UKPLab/acl2016-optimizing-rouge. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1. References Chinatsu Aone, Mary Ellen Okurowski, James Gorlinsky, and Bjornar Larsen. 1995. A Trainable Summarizer with Knowledge Acquired from Robust NLP Techniques. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 68–73. MIT Press, Cambridge, MA, USA. Florian Boudin, Hugo Mougard, and Benot Favre. 2015. Concept-based Summarization using Integer Linear Programming: From Concept Pruning to Multiple Optimal Solutions. In Llus Mrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1914–1918, Lisbon, Portugal. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality As Salience in Text Summarization. Journal of Artificial Intelligence Research, pages 457–479. Dan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 10– 18, Boulder, Colorado. 1833 Aria Haghighi and Lucy Vanderwende. 2009. Exploring Content Models for Multi-document Summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362–370. Kai Hong and Ani Nenkova. 2014. Improving the Estimation of Word Importance for News MultiDocument Summarization. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 712–721, Gothenburg, Sweden. Kai Hong, John Conroy, benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A Repository of State of the Art and Competitive Baseline Summaries for Generic News Summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1608–1616, Reykjavik, Iceland. Mikael K˚ageb¨ack, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive Summarization using Continuous Vector Space Models. In Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 31–39, Gothenburg, Sweden. Alex Kulesza and Ben Taskar. 2012. Determinantal Point Processes for Machine Learning. Foundations and Trends in Machine Learning, 5:123–286. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A Trainable Document Summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 68–73, Seattle, Washington, USA. Association for Computing Machinery. Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013a. Document Summarization via Guided Sentence Compression. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 490–500, Seattle, Washington, USA. Chen Li, Xian Qian, and Yang Liu. 2013b. Using Supervised Bigram-based ILP for Extractive Summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1004–1013, Sofia, Bulgaria. Chen Li, Yang Liu, and Lin Zhao. 2015. Improving Update Summarization via Supervised ILP and Sentence Reranking. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1317–1322, Denver, Colorado. Hui Lin and Jeff A. Bilmes. 2011. A Class of Submodular Functions for Document Summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 510–520, Portland, Oregon. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out at ACL, pages 74–81, Barcelona, Spain. Annie Louis and Ani Nenkova. 2013. Automatically Assessing Machine Summary Content Without a Gold Standard. Computational Linguistic, 39(2):267–300, June. Hans Peter Luhn. 1958. The Automatic Creation of Literature Abstracts. IBM Journal of Research Development, 2:159–165. Inderjeet Mani and Eric Bloedorn. 1997. Multidocument Summarization by Graph Search and Matching. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, pages 622–628, Providence, Rhode Island. AAAI Press. Ryan McDonald. 2007. A Study of Global Inference Algorithms in Multi-document Summarization. In Proceedings of the 29th European Conference on IR Research, pages 557–564, Rome, Italy. SpringerVerlag. Rada Mihalcea. 2004. Graph-based Ranking Algorithms for Sentence Extraction, Applied to Text Summarization. In Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, ACLdemo ’04, page 20, Barcelona, Spain. Olof Mogren, Mikael K˚ageb¨ack, and Devdatt Dubhashi. 2015. Extractive Summarization by Aggregating Multiple Similarities. In Recent Advances in Natural Language Processing, pages 451–457, Hissar, Bulgaria. George L. Nemhauser and Laurence A. Wolsey. 1978. Best Algorithms for Approximating the Maximum of a Submodular Set Function. Mathematics of Operations Research, 3(3):177–188. George L. Nemhauser, Laurence A. Wolsey, and Marschall L. Fisher. 1978. An Analysis of Approximations for Maximizing Submodular Set FunctionsI. Mathematical Programming, 14:265–294. Jun-Ping Ng, Praveen Bysani, Ziheng Lin, Min-Yen Kan, and Chew-Lim Tan. 2012. Exploiting Category-Specific Information for Multi-Document Summarization. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 2093–2108, Mumbai, India. Hitoshi Nishikawa, Kazuho Arita, Katsumi Tanaka, Tsutomu Hirao, Toshiro Makino, and Yoshihiro Matsuo. 2014. Learning to Generate Coherent Summary with Discriminative Hidden Semi-Markov Model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1648–1659. 1834 Paul Over, Hoa Dang, and Donna Harman. 2007. DUC in Context. Information Processing and Management, 43(6):1506–1520. Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An Assessment of the Accuracy of Automatic Evaluation in Summarization. 
In Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1–9, Montreal, Canada. Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based Summarization of Multiple Documents: Sentence Extraction, Utility-based Evaluation, and User Studies. In Proceedings of the NAACL-ANLP Workshop on Automatic Summarization, volume 4, pages 21–30, Seattle, Washington. Ruben Sipos, Pannaga Shivaswamy, and Thorsten Joachims. 2012. Large-margin Learning of Submodular Summarization Models. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 224–233, Avignon, France. Hiroya Takamura and Manabu Okumura. 2010. Learning to Generate Summary as Structured Output. In Proceedings of the 19th ACM international Conference on Information and Knowledge Management, pages 1437–1440. Association for Computing Machinery. Kristian Woodsend and Mirella Lapata. 2012. Multiple Aspect Summarization Using Integer Linear Programming. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, (EMNLP-CoNLL), pages 233–243, Jeju Island, Korea. Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. Multi-document Summarization by Maximizing Informative Content-words. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, pages 1776–1782, Hyderabad, India. Morgan Kaufmann Publishers Inc. Wenpeng Yin and Yulong Pei. 2015. Optimizing Sentence Modeling and Selection for Document Summarization. In Proceedings of the 24th International Conference on Artificial Intelligence, pages 1383– 1389, Buenos Aires, Argentina. AAAI Press. A Supplemental Material A.1 Recursive Expression of ROUGE-N Let S = {si|i ≤m} and T = {ti|i ≤l} be two sets of sentences, S∗the reference summary, and ρ(X) denote the ROUGE-N score of the set of sentences X. Assuming that ρ(S) and ρ(T) are given, we prove the following recursive formula: ρ(S ∪T) = ρ(S) + ρ(T) −ϵ(S ∩T) (12) For compactness, we use the following notation as well: CX,S∗(g) = min(FX(g), FS∗(g)) (13) Proof: We have the following definitions: ρ(S) = 1 RN X g∈S∗ ˜FS,S∗(g) (14) ρ(T) = 1 RN X g∈S∗ ˜FT,S∗(g) (15) ϵ(S ∩T) = 1 RN X g∈S∗ max(CS,S∗(g)+CT,S∗(g)−FS∗(g), 0) (16) And by definition of ROUGE, the formula of S ∪ T: ρ(S ∪T) = 1 RN X g∈S∗ min(FS∪T (g), FS∗(g)) (17) In order to prove equation (12), we have to show that the following equation holds: X g∈S∗ CS,S∗(g) + X g∈S∗ CT,S∗(g) − X g∈S∗ max(CS,S∗(g) + CT,S∗(g) −FS∗(g), 0) = X g∈S∗ min(FS∪T (g), FS∗(g)) (18) It is sufficient to show: ∀g ∈S∗, CS,S∗(g) + CT,S∗(g)− max(CS,S∗(g) + CT,S∗(g) −FS∗(g), 0) = min(FS∪T (g), FS∗(g)) (19) Let g ∈S∗be a n-gram. There are two possibilities: 1835 • FS(g) + FT (g) ≤FS∗(g): g appears less times in S ∪T than in the reference summary. It implies: min(FS∪T (g), FS∗(g)) = FS∪T (g) = FS(g) + FT (g). Moreover, all FX(g) are positive numbers by definition, and FS(g) ≤ FS∗(g) is equivalent to: CS,S∗(g) = min(FS(g), FS∗(g)) = FS(g). Similarly, we have: CT,S∗(g) = min(FT (g), FS∗(g)) = FT (g). 
Since max(CS,S∗(g)+CT,S∗(g)−FS∗(g), 0) = 0, the equation (19) holds in this case. • FS(g) + FT (g) ≥FS∗(g): g appears more frequently in S∪T than in the reference summary. It implies: min(FS∪T (g), FS∗(g)) = FS∗(g). Here we have: max(CS,S∗(g) + CT,S∗(g) −FS∗(g), 0) = CS,S∗(g) + CT,S∗(g) −FS∗(g), and it directly follows that equation (19) holds in this case as well. Equation (19) has been proved, which proves (12) as well. A.2 Expanded Expression of ROUGE-N Let S = {si|i ≤m} be a set of sentences and ρ(S) its ROUGE-N score. We prove the following formula: ρ(S) = m X i=1 ρ(si)+ m X k=2 (−1)k+1( X 1≤i1≤···≤ik≤m ϵ(k)(si1 ∩· · ·∩sik)) (20) Proof: Let g ∈S∗be a n-gram in the reference summary, and k ∈[1, m] the number of sentences in which it appears. Specifically, ∃{si1, · · · , sik}, ∀sij ∈{si1, . . . , sik}, g ∈sij. In order to prove the formula (20), we have to find an expression for the ϵ(k) that gives to g the correct contribution to the formula: 1 RN min(FS(g), FS∗(g)) (21) First, we observe that g does not appear in the terms that contain the intersection of more than k sentences. Specifically, ϵ(t) is not affected by g if t ≥k. However, g is affected by all the ϵ(t) for which t ≤k. Given that g appears in the sentences {si1, . . . , sij}, we can determine the score attributed to g by the previous ϵ(t) (t ≤k): S(k−1)(g) = X s∈{si1,...,sik} ρ(s)+ k X l=2 (−1)(l+1) X 1≤i1≤···≤il≤k ϵ(l)(si1 ∩· · · ∩sil)) (22) Now, g receives the correct contribution to the overall scores if ϵ(k) is defined as follows: ϵ(k)(si1 ∩· · · ∩sij) = 1 R X g∈si1∩···∩sij min(C{si1,...,sik}(g), FS∗(g)) −S(k−1)(g) (23) Indeed, with this expression for ϵ(k), the score of g is: S(k−1)(g) + 1 RN min(C{si1,...,sik}(g), FS∗(g)) −S(k−1)(g) (24) Which can be simplified to: 1 RN min(C{si1,...,sik}(g), FS∗(g)) (25) Since g appears only in the sentences {si1, . . . , sik}, ˜F{si1,...,sik}(g) = FS(g) and it follows that: 1 RN min(C{si1,...,sik}(g), FS∗(g)) = 1 RN min(FS(g), FS∗(g)) (26) This proves equation (20) because we observe that g will not be affected by any other terms. Every ϵ(t) for t ≤k including g is counted by S(k−1), and no other terms from ϵ(k) will affect g because all the other terms ϵ(k) should contain at least one sentence that is not in {si1, . . . , sik} and g would not belong to this intersection by definition. Finally, it has been proved in the appendix A.1 that for k = 2, ϵ(2) has a reduced form: ϵ(2)(sa ∩sb) = 1 RN X g∈S∗ max(Csa,S∗(g)+Csb,S∗(g)−FS∗(g), 0) (27) In the paper, we ignore the terms for k ≥2, therefore we do not search for a reduced form for these terms. 1836
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1837–1847, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Phrase Structure Annotation and Parsing for Learner English Ryo Nagata Konan University 8-9-1 Okamoto, Higashinada Kobe, Hyogo 658-8501, Japan nagata-acl @hyogo-u.ac.jp. Keisuke Sakaguchi Johns Hopkins University 3400 North Charles Street Baltimore, MD, 21218, USA [email protected] Abstract There has been almost no work on phrase structure annotation and parsing specially designed for learner English despite the fact that they are useful for representing the structural characteristics of learner English. To address this problem, in this paper, we first propose a phrase structure annotation scheme for learner English and annotate two different learner corpora using it. Second, we show their usefulness, reporting on (a) inter-annotator agreement rate, (b) characteristic CFG rules in the corpora, and (c) parsing performance on them. In addition, we explore methods to improve phrase structure parsing for learner English (achieving an F-measure of 0.878). Finally, we release the full annotation guidelines, the annotated data, and the improved parser model for learner English to the public. 1 Introduction Learner corpora have been essential for NLP tasks related to learner language such as grammatical error correction. They are normally annotated with linguistic properties. In the beginning, attention was mainly focused on grammatical error annotation (Izumi et al., 2004; D´ıaz-Negrillo et al., 2009; Dale and Kilgarriff, 2011; Ng et al., 2013). Recently, it has been expanded to grammatical annotation — first, Part-Of-Speech (POS) tagging (D´ıaz-Negrillo et al., 2009; Nagata et al., 2011) and then syntactic annotation (Kepser et al., 2004; Dickinson and Ragheb, 2009; Ragheb and Dickinson, 2012; Ragheb and Dickinson, 2013); syntactic annotation for learner corpora is now intensively studied. Among a variety of studies, a series of work by Ragheb and Dickinson (Dickinson and Ragheb, 2009; Ragheb and Dickinson, 2012; Ragheb and Dickinson, 2013) is important in that they proposed a dependency annotation scheme, theoretically and empirically evaluated it, and revealed its theoretical problems, which gives a good starting point to those who wish to develop a new annotation scheme for learner corpora. Researchers including Foster (2004) and Ott and Ziai (2010) have even started using dependencyannotated learner corpora to develop dependency parsers for learner language. Although research on syntactic analysis for learner corpora has been making great progress as noted above, it is not yet complete. There are at least three limitations in the previous work: (i) as far as we are aware, there has been almost no work on phrase structure annotation specially designed for learner corpora; (ii) there are no publicly available learner corpora annotated with syntax; (iii) phrase structure parsing performance on learner English has not yet been reported. The first limitation is that there exists no phrase structure annotation scheme specially designed for learner English. As related work, Foster (2007a; 2007b) and Foster and Andersen (2009) propose a method for creating a pseudo-learner corpus by artificially generating errors in a native corpus with phrase structures. However, the resulting corpus does not capture various error patterns in learner English. 
Concerning the second limitation, a corpus greatly increases in value when it is available to the public as has been seen in other domains. Nevertheless, whether dependency or phrase structure, there seems to be no publicly available learner corpora annotated with syntax. The above two limitations cause the third one that phrase structure parsing performance on leaner English has not yet been reported. For this reason, Cahill (2015) demonstrates how ac1837 curately an existing parser performs on a pseudolearner corpus (section 23 of WSJ with errors artificially generated by Foster and Andersen (2009)’s method). Cahill et al. (2014) show the performance of a phrase structure parser augmented by self-training on students’ essays, many of which are presumably written by native speakers of English. Tetreault et al. (2010) partially show phrase structure parsing performance concerning preposition usage in learner English, concluding that it is effective in extracting features for preposition error correction. We need to reveal full parsing performance to be able to confirm that this is true for other syntactic categories and whether or not we should use phrase structure parsing to facilitate related tasks such as grammatical error correction and automated essay scoring. Here, we emphasize that phrase structure annotation has at least two advantages over dependency annotation1. First of all, it can directly encode information about word order. This is particularly important because learner corpora often contain errors in word order. For example, phrase structure parsing will reveal in which phrases errors in word order tend to occur as we will partly do in Sect. 3. Second of all, phrase structure rather abstractly represents syntactic information in terms of phrase-to-phrase relations. This means that the characteristics of learner English are represented by means of phrase-to-phrase relations (e.g., context free grammar (CFG) rules) or even as trees. Take as an example, one of the characteristic trees we found in the corpora we have created: S NP VP ϕ ADJP As we will discuss in Sect. 3, this tree suggests the mother tongue interference that the copula is not necessary in adjective predicates in certain languages. It would be linguistically interesting to reveal what CFG rules we need to add to, or subtract from, the native CFG rule set to be able to generate learner English. This is our primary motivation for this work although our other motivations include developing a parser for learner English. In view of this background, we address the above problems in this paper. Our contributions 1We are not arguing that phrase structure annotation is better than dependency annotation; they both have their own advantages, and thus both should be explored. are three-fold. First, we present a phrase structure annotation scheme for dealing with learner English consistently and reliably. For this, we propose five principles which can be applied to creating a novel annotation scheme for learner corpora. Second, we evaluate the usefulness of the annotation scheme by annotating learner corpora using it. To be precise, we report on inter-annotator agreement rate and characteristic CFG rules in the corpora, and take the first step to revealing phrase structure parsing performance on learner English. In addition, we explore methods to improve phrase structure parsing for learner English. Finally, we release the full annotation guidelines, the annotated corpora, and the improved parser model to the public. 
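As a concrete illustration of this motivation (a sketch only, not the authors' tooling): once learner sentences carry Penn-Treebank-style bracketed parses, the CFG rules they instantiate can be tabulated and set against a native rule set, for example with NLTK. The bracketed tree string below is our own toy example echoing the missing-copula pattern above.

from collections import Counter
from nltk import Tree

def cfg_rule_counts(bracketed_parses):
    # Tally the CFG productions used across a collection of bracketed parses
    counts = Counter()
    for s in bracketed_parses:
        counts.update(str(p) for p in Tree.fromstring(s).productions())
    return counts

learner_trees = ["(S (NP (PRP I)) (VP-ERR (ADJP (JJ busy))) (. .))"]
print(cfg_rule_counts(learner_trees).most_common())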
The rest of this paper is structured as follows. Sect. 2 describes the annotation scheme. Sect. 3 explores the annotated learner corpora. Sect. 4 evaluates parsing performance using it. 2 Phrase Structure Annotation Scheme 2.1 General Principles The annotation scheme is designed to consistently retrieve the structure in the target text that is closest to the writer’s intention. The following are the five principles we created to achieve it: (P1) Consistency-first principle (P2) Minimal rule set principle (P3) Locally superficially-oriented principle (P4) Minimum edit distance principle (P5) Intuition principle (P1) states that the most important thing in our annotation scheme is consistency. It is a trade-off between quality and quantity of information; detailed rules that are too complicated make annotation unmanageable yet they may bring out valuable information in learner corpora. Corpus annotation will be useless if it is inconsistent and unreliable no matter how precisely the rules can describe linguistic phenomena. Therefore, this principle favors consistency over completeness. Once we annotate a corpus consistently, we consider adding further detailed information to it. (P2) also has to do with consistency. The smaller the number of rules is, the easier it becomes to practice the rules. Considering this, if we have several candidates for describing a new linguistic phenomenon particular to learner English, we will choose the one that minimizes the number of modifications to the existing rule set. Note that 1838 this applies to the entire rule set; an addition of a rule may change the existing rule set. (P3) is used to determine the tag of a given token or phrase. As several researchers (D´ıazNegrillo et al., 2009; Dickinson and Ragheb, 2009; Nagata et al., 2011; Ragheb and Dickinson, 2012) point out, there are two ways of performing annotation, according to either superficial (morphological) or contextual (distributional) evidence. For example, in the sentence *My university life is enjoy., the word enjoy can be interpreted as a verb according to its morphological form or as an adjective (enjoyable) or a noun (enjoyment) according to its context. As the principle itself construes, our annotation scheme favors superficial evidence over distributional. This is because the interpretation of superficial evidence has much less ambiguity and (P3) can determine the tag of a given token by itself as seen in the above example. Distributional information is also partly encoded in our annotation scheme as we discuss in Subsect. 2.2. (P4) regulates how to reconstruct a correct form of a given sentence containing errors, which helps to determine its phrase structure. The problem is that often one can think of several candidates as possible corrections, which can become a source of inconsistency. (P4) gives a clear solution to this problem. It selects the one that minimizes the edit distance from the original sentence. Note that the edit distances for deletion, addition, and replacement are one, one, and two (deletion and addition), respectively in our definition. For the cases to which these four principles do not apply, the fifth and final principle (P5) allows annotators to use their intuition. It should be noted, however, that the five principles apply in the above order to avoid unnecessary inconsistency. 
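The distance used by (P4) is a standard word-level edit distance with the costs given above (deletion 1, addition 1, replacement 2, i.e. a replacement counts as a deletion plus an addition). A minimal sketch (Python; the function name is ours):

def edit_distance(original, corrected):
    # original, corrected: lists of tokens; costs: delete = 1, insert = 1, replace = 2
    m, n = len(original), len(corrected)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if original[i - 1] == corrected[j - 1] else 2
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return d[m][n]

Under these costs, correcting "I want to happy ." to "I want to be happy ." has distance 1 (one addition), while "I want happiness ." has distance 3, so (P4) selects the former, matching the example worked through in Subsect. 2.2.1 below.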
2.2 Annotation Rules Our annotation scheme is based on the POStagging and shallow-parsing annotation guidelines for learner English (Nagata et al., 2011), which in turn are based on the Penn Treebank II-style bracketing guidelines (Bies et al., 1995) (which will be referred to as PTB-II, hereafter). This naturally leads us to adopt the PTB-II tag set in ours; an exception is that we exclude the function tags and null elements from our present annotation scheme for annotation efficiency2. Accordingly, we revise 2We will most likely include them in a future version. the above guidelines to be able to describe phrase structures characteristic of learner English. The difficulties in syntactic annotation of learner English mainly lie in the fact that grammatical errors appear in learner English. Grammatical errors are often classified into three types as in Izumi et al. (2004): omission, insertion, and replacement type errors. In addition, we include other common error types (word order errors and fragments) in the error types to be able to describe learners’ characteristics more precisely. The following discuss how to deal with these five error types based on the five principles. 2.2.1 Omission Type Errors This type of error is an error where a necessary word is missing. For example, some kind of determiner is missing in the sentence *I am student. The existing annotation rules in PTB-II can handle most omission type errors. For instance, the PTB-II rule set would parse the above example as “(S (NP I) (VP am (NP student).)).” Note that syntactic tags for irrelevant parts are omitted in this example (and hereafter). A missing head word may be more problematic. Take as an example the sentence *I busy. where a verb is missing. The omission prevents the rule S →NP VP from applying to it. If we created a new rule for every head-omission with no limitation, it would undesirably increase the number of rules, which violates (P2). To handle head-omissions, we propose a function tag -ERR. It denotes that a head is missing in the phrase in question. The function tag makes it possible to apply the PTB-II rule set to sentences containing head-omissions as in: S NP I VP-ERR ϕ ADJP busy We need to reconstruct a correct form of a given sentence to determine whether or not a head word is missing. We use Principle (P4) for solving the problem as discussed in Sect. 2.1. For instance, the sentence *I want to happy. can be corrected as either I want to be happy. (edit distance is one; an addition of a word) or I want happiness. (three; two deletions and an addition). Following (P4), we select the first correction that minimizes the edit 1839 distance, resulting in: S NP I VP want VP TO to VP-ERR ϕ ADJP happy 2.2.2 Insertion Type Errors An insertion type error is an error where an extra word is used incorrectly. For example, the word about is an extra word in the sentence *She discussed about it. Insertion type errors are more problematic than omission type errors. It is not trivial how to annotate an erroneous extra word. On the one hand, one can argue that the extra word about is a preposition from its morphological form. On the other hand, one can also argue that it is not, because the verb discuss takes no preposition. As with this example, insertion type errors involve an ambiguity between superficial and distributional categories. Principles (P2) and (P3) together solve the ambiguity. According to (P3), one should always stick to the superficial evidence. 
For example, the extra word about should be tagged as a preposition. After this, PTB-II applies to the rest of the sentence, which satisfies (P2). As a result, one would obtain the parse “(S (NP She) (VP discussed (PP (IN about) (NP it)))).).” Insertion type errors pose a more vital problem in some cases. Take as an example the sentence *It makes me to happy. where the word to is erroneous. As before, one can rather straightforwardly tag it as a preposition, giving the POS sequence: *It/PRP makes/VBZ me/PRP to/TO happy/JJ ./. However, none of the PTB-II rules applies to the POS sequence TO JJ to make a phrase. This means that we have to create a new rule for such cases. There are at most three possibilities of grouping the words in question to make a phrase: to happy me to me to happy Intuitively, the first one seems to be the most acceptable. To be precise, the second one assumes a postposition, contrary to the English preposition system. The third one assumes a whole new rule generating a phrase from a personal pronoun, a preposition, and an adjective into a phrase. Thus, they cause significant modifications to PTBII, which violates (P2). In contrast, a preposition normally constitutes a prepositional phrase with another phrase (although not normally with an adjective phrase). Moreover, the first grouping would produce for the rest of the words the perfect phrase structure corresponding to the correct sentence without the preposition to: S NP me ? TO to ADJP happy which satisfies (P2) unlike the second and third ones. Accordingly, we select the first one. All we have to do now is to name the phrase to happy. There is an ambiguity between PP and ADJP, both of which can introduce the parent S. The fact that a preposition constitutes a prepositional phrase with another phrase leads us to select PP for the phrase. Furthermore, the tag of a phrase is normally determined by the POS of one of the immediate constituents, if any, that is entitled to be a head (i.e., the headedness). Considering this, we select PP in this case, which would give the parse to the entire sentence as follows:“(S (NP It) (VP makes (S (NP me) (PP (TO to) (ADJP happy)))).)” In summary, for insertion errors to which PTBII do not apply, we determine their phrase structures as follows: (i) intuitively group words into a phrase, minimizing the number of new rules added (it is often helpful to examine whether an existing rule is partially applicable to the words in question); (ii) name the resulting phrase by the POS of one of the immediate children that is entitled to be a head. 2.2.3 Replacement Type Errors A replacement type error is an error where a word should be replaced with another word. For example, in the sentence: *I often study English conversation., the verb study should be replaced with a more appropriate verb such as practice. To handle replacement type errors systematically, we introduce a concept called POS class, which is a grouping of POS categories defined as 1840 Class Members Noun NN, NNS, NNP, NNPS Verb VB, VBP, VBZ, VBD Adjective JJ, JJR, JJS Adverb RB, RBR, RBS Participle VBN, VBG Table 1: POS class. in Table 1; POS tags that are not shown in Table 1 form a POS class by itself. If the replacement in question is within the same POS class, it is annotated following Principles (P2) and (P3). Namely, the erroneous word is tagged according to its superficial form and the rest of the sentence is annotated by the original rule set, which avoids creating new rules3. 
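The POS classes of Table 1 can be written down directly. The following minimal sketch (an illustration, with a hypothetical same_class helper rather than any released tool) encodes the table and tests whether a replacement stays within one POS class, which is the condition under which the word is simply tagged by its superficial form; replacements that cross class boundaries are handled with the two-layer CE tag introduced below.

```python
# Illustrative encoding of the POS classes in Table 1.
# Tags not listed form a singleton class of their own.
POS_CLASSES = {
    "Noun":       {"NN", "NNS", "NNP", "NNPS"},
    "Verb":       {"VB", "VBP", "VBZ", "VBD"},
    "Adjective":  {"JJ", "JJR", "JJS"},
    "Adverb":     {"RB", "RBR", "RBS"},
    "Participle": {"VBN", "VBG"},
}

def pos_class(tag):
    """Return the POS class of a Penn Treebank tag (singleton class if unlisted)."""
    for name, members in POS_CLASSES.items():
        if tag in members:
            return name
    return tag  # singleton class

def same_class(superficial_tag, intended_tag):
    """True if a replacement error stays within one POS class, i.e.,
    the word can simply be tagged by its superficial form (P3)."""
    return pos_class(superficial_tag) == pos_class(intended_tag)

print(pos_class("VBZ"), pos_class("VBD"))   # Verb Verb  -> within-class replacement
print(same_class("VB", "NN"))               # False -> needs a two-layer tag such as CE:VB:NN
```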
If the replacement in question is from one POS class to another, we will need to take special care because of the ambiguity between superficial and distributional POS categories. For example, consider the sentence *I went to the see. where the word see is used as a noun, which is not allowed in the standard English, and the intention of the learner is likely to be sea (from the surrounding context). Thus, the word see is ambiguous between a verb and a noun in the sentence. To avoid the ambiguity, we adopt a two layerannotation scheme (D´ıaz-Negrillo et al., 2009; Nagata et al., 2011; Ragheb and Dickinson, 2012) to include both POSs. In our annotation scheme, we use a special tag (CE) for the replacement error and encode the two POSs as its attribute values as in CE:VB:NN. Then we can use the distributional POS tag to annotate the rest of the sentence. For example, the above example sentence would give a tree: S NP I VP VP went PP to NP the CE:VB:NN see 3This means that spelling and morphological errors are not directly coded in our annotation scheme as in He/PRP has/VBZ a/DT books/NNS. 2.2.4 Errors in Word Order Errors in word order often appear in learner English. A typical example would be the reverse of the subject-object order: *This place like my friends. (correctly, My friends like this place.). Principles (P2) and (P3) again play an important role in handling errors in word order. We first determine the POS tag of each word according to its morphological form. This is rather straightforward because errors in word order do not affect the morphological form. Then we determine the whole structure based on the resulting POS tags, following Principle (P2); if rules in PTB-II apply to the sentence in question, we parse it according to them just as in the above example sentence: “(S (NP This place) (VP like (NP my friends)).)” Even if any of the existing rules do not apply to a part of the sequence of the given POS tags, we stick to Principle (P3) as much as possible. In other words, we determine partial phrase structures according to the given POS sequence to which the existing rule set applies. Then we use the XP-ORD tag to put them together into a phrase. As an example, consider the sentence *I ate lunch was delicious. (correctly, The lunch I ate was delicious.). According to the superficial forms and local contexts, the phrase I ate lunch would form an S: S NP I VP ate lunch However, the relations of the S to the rest of the constituents are not clear. Here, we use the XPORD tag to combine the S with the rest together: S XP-ORD S NP I VP ate lunch VP was ADJP delicious 2.2.5 Fragments In learner corpora, sentences are sometimes incomplete. They are called fragments (e.g., missing main clause: Because I like it.). Fortunately, there exists already a tag for fragments in PTB-II: FRAG. Accordingly, we use 1841 it in our annotation scheme as well. For example, the above example would give the parse “(FRAG (SBAR Because (S (NP I (VP like (NP it))))).)” An exception is incomplete sentences which are defined as S in the bracketing guidelines for biomedical texts (Warner et al., 2012). We tag such incomplete sentences as S following the convention. For example, an adjective phrase can form an S (e.g., (S (ADVP Beautiful)!)). 2.2.6 Unknown Words and Phrases There are cases where one cannot tell the tag of a given word. We use the UK tag for such words (e.g., Everyone is don/UK). 
Even if its tag is unknown, it is somehow clear in some cases that the unknown word is the head word of the phrase just as in the above example. In that case, we use the UP tag so that it satisfies the rule about the headedness of a phrase we have introduced in Subsect. 2.2.2. Based on this, the above example would give the parse “(S (NP everyone) (VP is (UP (UK don ))).)” For a phrase whose head word is unknown due to some error(s) in it, we use the XP tag instead of the UP tag. As a special case of XP, we use the XP-ORD tag to denote the information that we cannot determine the head of the phrase because of an error in word order. 3 Corpus Annotation We selected the Konan-JIEM (KJ) learner corpus (Nagata et al., 2011) (beginning to intermediate levels) as our target data. It is manually annotated with POSs, chunks, and grammatical errors, which helps annotators to select correct tags. We also included in the target data a part of the essays in ICNALE (Ishikawa, 2011) consisting of a variety of learners (beginning to advanced levels4) in Asia (China, Indonesia, Japan, Korea, Taiwan, Thailand, Hong Kong, Singapore, Pakistan, Philippines). Table 2 shows the statistics on the two learner corpora. Two professional annotators5 participated in the annotation process. One of them first annotated the KJ data and double-checked the results. Between the first and second checks, we discussed 4The details about the proficiency levels are available in http://language.sakura.ne.jp/icnale/ about.html 5The annotators, whose mother tongue is Japanese, have a good command of English. They have engaged in corpus annotation including phrase structure annotation for around 20 years. the results with the annotator. We revised the annotation scheme based on the discussion which resulted in the present version. Then the second annotator annotated a part of the KJ data to evaluate the consistency between the two annotators. We took out 11 texts (955 tokens) as a development set. The second annotator annotated it using the revised annotation scheme where she consulted the first annotator if necessary. After this, we provided her with the differences between the results of the two annotators. Finally, the first annotator annotated the data in ICNALE while the second independently another part of the KJ data and a part of the ICNALE data (59 texts, 12,052 tokens in total), which were treated as a test set. Table 3 shows inter-annotator agreement measured in recall, precision, F-measure, complete match rate, and chance-corrected measure (Skjærholt, 2014). We used the EVALB tool6 with the Collins (1997)’s evaluation parameter where we regarded the annotation results of the first annotator as the gold standard set. We also used the syn-agreement tool7 to calculate chancecorrected measure. It turns out that the agreement is very high. Even in the test set, they achieve an F-measure of 0.928 and a chance-corrected measure of 0.982. This shows that our annotation scheme enabled the annotators to consistently recognize the phrase structures in the learner corpora in which grammatical errors frequently appear. The comparison between the results of the two annotators shows the major sources of the disagreements. One of them is annotation concerning adverbial phrases. In PTB-II, an adverbial phrase between the subject NP and the main verb is allowed to be a constituent of the VP (e.g., (S (NP I) (VP (ADVP often) go))) and also of the S (e.g., (S (NP I) (ADVP often) (VP go))). 
Another major source is the tag FRAG (fragments); the annotators disagreed on distinguishing between FRAG and S in some cases.
The high agreement shows that the annotation scheme provides an effective way of consistently annotating learner corpora with phrase structures. However, one might argue that the annotation does not represent the characteristics of learner English well because it favors consistency (and rather simple annotation rules) over completeness.
6http://nlp.cs.nyu.edu/evalb/
7https://github.com/arnsholt/syn-agreement

Corpus    # essays   # sentences   # tokens   # errors/token   # errors/sentence
KJ        233        3,260         30,517     0.15             1.4
ICNALE    134        1,930         33,913     0.08             1.4
Table 2: Statistics on annotated learner corpora.

Set           R      P      F      CMR    CCM
Development   0.981  0.981  0.981  0.913  0.995
Test          0.919  0.927  0.928  0.549  0.982
Table 3: Inter-annotator agreement measured in Recall (R), Precision (P), F-measure (F), Complete Match Rate (CMR), and Chance-Corrected Measure (CCM).

To see if the annotation results represent the characteristics of learner English, we extracted characteristic CFG rules from them. The basic idea is that we compare the CFG rules obtained from them with those from a native corpus (the Penn Treebank-II)8; we select as characteristic CFG rules those that often appear in the learner corpora and not in the native corpus. To formalize the extraction procedure, we denote a CFG rule and its conditional probability as A → B and p(B|A), respectively. Then we define the score for A → B by
s(A → B) = log ( pL(B|A) / pN(B|A) )
where we distinguish between the learner and native corpora by the subscripts L and N, respectively. We estimate p(B|A) by expected likelihood estimation. Note that we remove the function tags to reduce the differences in the syntactic tags in the two corpora when we calculate the score.
8To confirm that the extracted characteristics are not influenced by the differences in the domains of the two corpora, we also compared the learner data with the native speaker sub-corpus in ICNALE, which is in the same domain. It turned out that the extracted CFG rules were very similar to those shown in Table 4.
Table 4 shows the top 10 characteristic CFG rules sorted in descending and ascending order according to their scores, which correspond to overused and underused rules in the learner corpora, respectively. Note that Table 4 excludes rules consisting of only terminal and/or preterminal symbols to focus on the structural characteristics. Also, it excludes rules containing a Quantifier Phrase (QP; e.g., (NP (QP 100 million) dollars)), which appear frequently in, and are characteristic of, the native corpus.
In the overused column, CFG rules often contain the ϕ element. At first sight, this does not seem so surprising because ϕ never appears in the native corpus. However, the rules actually show in which syntactic environments missing heads tend to occur. For example, the CFG rule PP → ϕ S shows that prepositions tend to be missing in prepositional phrases governing an S, as in *I am good doing this, which we had not realized before this investigation. More interestingly, the CFG rule VP → ϕ ADJP reveals that an adjective phrase can form a verb phrase without a verb in learner English. Looking into the annotated data shows that the copula is missing in predicative adjectives, as in the tree (S (NP I) (VP ϕ (ADJP busy))). This suggests transfer of a linguistic system in which the copula is not necessary, or may be omitted, in predicate adjectives, as in certain languages such as Japanese and Chinese.
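The rule scoring defined above is straightforward to reproduce. The following minimal sketch uses toy counts and assumes that expected likelihood estimation is implemented as add-0.5 smoothing over the right-hand sides observed for each left-hand side; it is intended only as an illustration of the computation, not as the actual extraction script.

```python
import math
from collections import Counter, defaultdict

# Sketch of s(A -> B) = log( pL(B|A) / pN(B|A) ), with p(B|A) estimated by
# expected likelihood estimation (assumed here to be add-0.5 smoothing).
def group_by_lhs(rule_counts):
    """rule_counts: Counter over (lhs, rhs) pairs, rhs given as a tuple of symbols."""
    by_lhs = defaultdict(Counter)
    for (lhs, rhs), c in rule_counts.items():
        by_lhs[lhs][rhs] += c
    return by_lhs

def score(rule, learner_counts, native_counts):
    lhs, rhs = rule
    L, N = group_by_lhs(learner_counts), group_by_lhs(native_counts)
    rhss = set(L[lhs]) | set(N[lhs])        # candidate right-hand sides for this lhs
    def p(counter):
        total = sum(counter.values()) + 0.5 * len(rhss)
        return (counter[rhs] + 0.5) / total
    return math.log(p(L[lhs]) / p(N[lhs]))

# Toy counts: VP -> ϕ ADJP occurs in the learner corpora but never in the native
# treebank, so it receives a large positive (overuse) score.
learner = Counter({("VP", ("ϕ", "ADJP")): 30, ("VP", ("VBZ", "ADJP")): 300})
native  = Counter({("VP", ("VBZ", "ADJP")): 5000})
print(score(("VP", ("ϕ", "ADJP")), learner, native))
```

With these toy counts, VP → ϕ ADJP receives a large positive score, consistent with its overused status in Table 4.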
Similarly, the rule VP → ϕ NP shows in which environment a verb taking an object tends to be missing. Out of the 28 instances, 18 (64%) are in a subordinate clause, which implies that learners tend to omit a verb when more than one verb appears in a sentence. The second rule S → XP VP . implies that the subject NP cannot be recognized because of a combination of grammatical errors (cf. S → NP VP .). The corpus data show that 21% of the XP in S → XP VP . are actually XP-ORD concerning an error in a relative clause, just as shown in the tree in Subsect. 2.2.4. Some of the learners apparently have problems in appropriately using relative clauses in the subject position. It seems that the structure of the relative clause, containing another verb before the main verb, confuses them.

Overuse                  Score  |  Underuse               Score
PP → ϕ NP                 9.0   |  NP → NP , NP ,          -4.6
S → XP VP .               7.2   |  S → NP NP               -2.7
PP → IN IN S              6.7   |  S → NP VP . ”           -2.6
S → XP .                  6.6   |  ADVP → NP RBR           -2.5
VP → ϕ ADJP               6.5   |  S → S , NP VP .         -2.4
VP → ϕ NP                 6.3   |  NP → NP , SBAR          -2.4
SBAR → IN NN TO S         6.1   |  SBAR → WHPP S           -2.3
PP → ϕ S                  6.1   |  VP → VBD SBAR           -2.2
S → ADVP NP ADVP VP .     5.8   |  S → NP PRN VP .         -2.2
PP → IN TO NP             5.7   |  S → PP , NP VP . ”      -2.1
Table 4: Characteristic CFG rules.

Most of the underused CFG rules are those that introduce rather complex structures. For example, the eighth rule VP → VBD SBAR implies a structure such as He thought that · · · . The underused CFG rules are a piece of evidence that this population of learners of English cannot use such complex structures as fluently as native speakers do. Considering this, it will be useful feedback to provide them with the rules (transformed into interpretable forms). As in this example, phrase structure annotation should be useful not only for second language acquisition research but also for language learning assistance.

4 Parsing Performance Evaluation

We tested the following two state-of-the-art parsers on the annotated data: the Stanford Statistical Natural Language Parser (ver. 2.0.3) (de Marneffe et al., 2006) and the Charniak-Johnson parser (Charniak and Johnson, 2005). We gave the tokenized sentences to them as their inputs. We again used the EVALB tool with the Collins (1997) evaluation parameters. Table 5 shows the results.

Parser             R      P      F      CMR
Stanford           0.812  0.832  0.822  0.398
Charniak-Johnson   0.845  0.865  0.855  0.465
Table 5: Parsing performance on learner English.

To our surprise, both parsers perform very well on the learner corpora despite the fact that they contain a number of grammatical errors and also syntactic tags that are not defined in PTB-II. Their performance is comparable to, or even better than, that on the Penn Treebank (reported in Petrov (2010)).
To achieve further improvement, we augmented the Charniak-Johnson parser with the learner data. We first retrained its parser model using sections 2-21 of the Penn Treebank Wall Street Journal (hereafter, WSJ) as training data and section 24 as development data, following the settings shown in Charniak and Johnson (2005). We then added the learner corpora to the training data using six-fold cross validation. We split the learner data into six parts, each of which consisted of approximately 61 essays, used one sixth as test data and another one sixth as development data instead of section 24, and retrained the parser model using the development data and the training data consisting of the remaining four-sixths of the learner data and sections 2-21 of WSJ.
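The data preparation for this six-fold setup can be sketched as follows. The helper below is purely illustrative (hypothetical function and variable names; the actual retraining used the Charniak-Johnson training pipeline), and its weight_n parameter anticipates the weighting experiment described next.

```python
import random

# Illustrative sketch: for each fold, yield a training set that combines WSJ
# sections 2-21 with the remaining learner folds duplicated weight_n times.
def make_splits(learner_essays, wsj_2_21, n_folds=6, weight_n=1, seed=0):
    essays = list(learner_essays)
    random.Random(seed).shuffle(essays)
    folds = [essays[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        test = folds[k]
        dev = folds[(k + 1) % n_folds]            # used in place of WSJ section 24
        rest = [e for i, f in enumerate(folds)
                if i not in (k, (k + 1) % n_folds) for e in f]
        train = list(wsj_2_21) + rest * weight_n  # copy the learner data n times
        yield train, dev, test
```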
We also conducted experiments where we copied the four-sixths of the learner data n times (1 ≤ n ≤ 50) and added the copies to the training data to increase its weight in retraining.
Figure 1 shows the results. The simple addition of the learner data (n = 1) already outperforms the parser trained only on sections 2-21 of WSJ (n = 0) in both recall and precision, achieving an F-measure of 0.866 and a complete match rate of 0.515. The augmented parser model works particularly well on recognizing erroneous fragments in the learner data; F-measure improved to 0.796 (n = 1) from 0.683 (n = 0) on the sentences containing fragments (i.e., FRAG) (46 out of the 111 sentences that were originally erroneously parsed even achieved a complete match). It was also robust against spelling errors. The performance further improves as the weight n increases (up to F = 0.878 when n = 24), which shows the effectiveness of using learner corpus data as training data.
Figure 1: Relation between learner corpus size in training data and parsing performance (recall and precision vs. weight n for the learner corpora).
Figure 2 shows the parsing performance of the Charniak-Johnson parser in each sub-corpus of ICNALE (classified by country code9). In most of the sub-corpora, the parser achieves an F-measure of 0.800 or better. By contrast, it performs much worse on the Korean sub-corpus. The major reason for this is that it contains a number of word order errors (i.e., XP-ORD); to be precise, 27 instances compared to zero to two instances in the other sub-corpora. Similarly, FRAG is a source of parsing errors in the Thai sub-corpus. We need further investigation to determine whether the differences in parsing performance are due to the writers' mother tongue or other factors (e.g., proficiency).
Figure 2: Parsing performance in each sub-corpus (F-measure by country code: HKG, JPN, CHN, PHL, SIN, TWN, PAK, IDN, THA, KOR).
9Ideally, it would be better to use sub-corpora classified by their mother tongues. Unfortunately, however, only country codes are provided in ICNALE.
We can summarize the findings as follows: (1) the state-of-the-art phrase structure parsers for native English are effective even in parsing learner English; (2) they are successfully augmented by learner corpus data; (3) the evaluation results support the previous report (Tetreault et al., 2010) that they are effective in extracting parse features for grammatical error correction (and probably for related NLP tasks such as automated essay scoring); (4) however, performance may vary depending on the writer's mother tongue and/or other factors, which requires further investigation to confirm.

5 Conclusions

This paper explored phrase structure annotation and parsing specially designed for learner English. Sect. 3 showed the usefulness of our phrase structure annotation scheme and the learner corpora annotated using it. The annotation results exhibited high consistency. They also shed light on (at least part of) the characteristics of the learners of English. Sect. 4 further reported on the performance of the two state-of-the-art parsers on the annotated corpus, suggesting that they are accurate enough to provide NLP applications with phrase structures in learner English. All these findings support the effectiveness of our phrase structure annotation scheme for learner English.
It would be much more difficult to conduct similar analyses and investigations without the phrase structure annotation scheme and a learner corpus annotated based on it. The annotation guidelines, the annotated data, and the parsing model for learner English created in this work are now available to the public10. In our future work, we will evaluate parsing performance on other learner corpora such as ICLE (Granger et al., 2009) consisting of a wide variety of learner Englishes. We will also extend phrase structure annotation, especially working on function tags. Acknowledgments We would like to thank Shin’ichiro Ishikawa, who created the ICNALE corpus, for providing us with the data and Arne Skærholt for the assistance to run the syn-agreement tool. We would also like to thank the three anonymous reviewers for their valuable feedback. This work was partly supported by Grant-in-Aid for Young Scientists (B) Grant Number JP26750091. 10We released the Konan-JIEM corpus with phrase structures on March 2015, which is available at http://www. gsk.or.jp/en/catalog/gsk2015-a/. We annotated the existing ICNALE, which was created by Dr. Ishikawa and his colleagues, with phrase structures. We released the data on Jun 2016, which is available at http: //language.sakura.ne.jp/icnale/ 1845 References Ann Bies, Mark Ferguson, Karen Katz, and Robert MacIntyre. 1995. Bracketing guidelines for Treebank II-style Penn treebank project. Aoife Cahill, Binod Gyawali, and James V. Bruno. 2014. Self-training for parsing learner text. In Proc. of 1st Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 66–73. Aoife Cahill. 2015. Parsing learner text: to Shoehorn or not to Shoehorn. In Proc. of 9th Linguistic Annotation Workshop, pages 144–147. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine N-best parsing and MaxEnt discriminative reranking. In Proc. of 43rd Annual Meeting on Association for Computational Linguistics, pages 173– 180. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. of 35th Annual Meeting of the Association for Computational Linguistics, pages 16–23. Robert Dale and Adam Kilgarriff. 2011. Helping our own: The HOO 2011 pilot shared task. In Proc. of 13th European Workshop on Natural Language Generation, pages 242–249. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. of 5th International Conference on Language Resources and Evaluation, pages 449–445. Ana D´ıaz-Negrillo, Detmar Meurers, Salvador Valera, and Holger Wunsch. 2009. Towards interlanguage POS annotation for effective learner corpora in SLA and FLT. Language Forum, 36(1–2):139–154. Markus Dickinson and Marwa Ragheb. 2009. Dependency annotation for learner corpora. In Proc. of 8th Workshop on Treebanks and Linguistic Theories, pages 59–70. Jennifer Foster and Øistein E. Andersen. 2009. GenERRate: Generating errors for use in grammatical error detection. In Proc. of 4th Workshop on Innovative Use of NLP for Building Educational Applications, pages 82–90. Jennifer Foster. 2004. Parsing ungrammatical input: An evaluation procedure. In Proc. of 4th International Conference on Language Resources and Evaluation, pages 2039–2042. Jennifer Foster. 2007a. Treebanks gone bad: generating a treebank of ungrammatical English. In 2007 Workshop on Analytics for Noisy Unstructured Data, pages 39–46, Jan. Jennifer Foster. 2007b. 
Treebanks gone bad: Parser evaluation and retraining using a treebank of ungrammatical sentences. International Journal on Document Analysis and Recognition, 10(3):129– 145, Dec. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English v2. Presses universitaires de Louvain, Louvain. Shinichiro Ishikawa, 2011. A new horizon in learner corpus studies: The aim of the ICNALE project, pages 3–11. University of Strathclyde Publishing, Glasgow. Emi Izumi, Toyomi Saiga, Thepchai Supnithi, Kiyotaka Uchimoto, and Hitoshi Isahara. 2004. The NICT JLE Corpus: Exploiting the language learners’ speech database for research and education. International Journal of The Computer, the Internet and Management, 12(2):119–125. Stephan Kepser, Ilona Steiner, and Wolfgang Sternefeld. 2004. Annotating and querying a treebank of suboptimal structures. In Proc. of 3rd Workshop on Treebanks and Linguistic Theories, pages 63–74. Ryo Nagata, Edward Whittaker, and Vera Sheinman. 2011. Creating a manually error-tagged and shallow-parsed learner corpus. In Proc. of 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1210–1219. Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL2013 shared task on grammatical error correction. In Proc. 17th Conference on Computational Natural Language Learning: Shared Task, pages 1–12. Niels Ott and Ramon Ziai. 2010. Evaluating dependency parsing performance on German learner language. In Proc. of 9th International Workshop on Treebanks and Linguistic Theories, pages 175–186. Slav Petrov. 2010. Products of random latent variable grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27. Marwa Ragheb and Markus Dickinson. 2012. Defining syntax for learner language annotation. In Proc. of 24th International Conference on Computational Linguistics, pages 965–974. Marwa Ragheb and Markus Dickinson. 2013. Interannotator agreement for dependency annotation of learner language. In Proc. of 8th Workshop on Innovative Use of NLP for Building Educational Applications, pages 169–179. Arne Skjærholt. 2014. A chance-corrected measure of inter-annotator agreement for syntax. In Proc. of 52nd Annual Meeting of the Association for Computational Linguistics, pages 934–944. 1846 Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010. Using parse features for preposition selection and error detection. In Proc. of 48nd Annual Meeting of the Association for Computational Linguistics Short Papers, pages 353–358. Colin Warner, Arrick Lanfranchi, Tim O’Gorman, Amanda Howard, Kevin Gould, and Michael Regan. 2012. Bracketing biomedical text: An addendum to Penn Treebank II guidelines. 1847
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1848–1858, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Trainable Spaced Repetition Model for Language Learning Burr Settles∗ Duolingo Pittsburgh, PA USA [email protected] Brendan Meeder† Uber Advanced Technologies Center Pittsburgh, PA USA [email protected] Abstract We present half-life regression (HLR), a novel model for spaced repetition practice with applications to second language acquisition. HLR combines psycholinguistic theory with modern machine learning techniques, indirectly estimating the “halflife” of a word or concept in a student’s long-term memory. We use data from Duolingo — a popular online language learning application — to fit HLR models, reducing error by 45%+ compared to several baselines at predicting student recall rates. HLR model weights also shed light on which linguistic concepts are systematically challenging for second language learners. Finally, HLR was able to improve Duolingo daily student engagement by 12% in an operational user study. 1 Introduction The spacing effect is the observation that people tend to remember things more effectively if they use spaced repetition practice (short study periods spread out over time) as opposed to massed practice (i.e., “cramming”). The phenomenon was first documented by Ebbinghaus (1885), using himself as a subject in several experiments to memorize verbal utterances. In one study, after a day of cramming he could accurately recite 12-syllable sequences (of gibberish, apparently). However, he could achieve comparable results with half as many practices spread out over three days. The lag effect (Melton, 1970) is the related observation that people learn even better if the spacing between practices gradually increases. For example, a learning schedule might begin with re∗Corresponding author. †Research conducted at Duolingo. view sessions a few seconds apart, then minutes, then hours, days, months, and so on, with each successive review stretching out over a longer and longer time interval. The effects of spacing and lag are wellestablished in second language acquisition research (Atkinson, 1972; Bloom and Shuell, 1981; Cepeda et al., 2006; Pavlik Jr and Anderson, 2008), and benefits have also been shown for gymnastics, baseball pitching, video games, and many other skills. See Ruth (1928), Dempster (1989), and Donovan and Radosevich (1999) for thorough meta-analyses spanning several decades. Most practical algorithms for spaced repetition are simple functions with a few hand-picked parameters. This is reasonable, since they were largely developed during the 1960s–80s, when people would have had to manage practice schedules without the aid of computers. However, the recent popularity of large-scale online learning software makes it possible to collect vast amounts of parallel student data, which can be used to empirically train richer statistical models. In this work, we propose half-life regression (HLR) as a trainable spaced repetition algorithm, marrying psycholinguistically-inspired models of memory with modern machine learning techniques. We apply this model to real student learning data from Duolingo, a popular language learning app, and use it to improve its large-scale, operational, personalized learning system. 2 Duolingo Duolingo is a free, award-winning, online language learning platform. 
Since launching in 2012, more than 150 million students from all over the world have enrolled in a Duolingo course, either via the website1 or mobile apps for Android, iOS, 1https://www.duolingo.com 1848 (a) skill tree screen (b) skill screen (c) correct response (d) incorrect response Figure 1: Duolingo screenshots for an English-speaking student learning French (iPhone app, 2016). (a) A course skill tree: golden skills have four bars and are “at full strength,” while other skills have fewer bars and are due for practice. (b) A skill screen detail (for the Gerund skill), showing which words are predicted to need practice. (c,d) Grading and explanations for a translation exercise. ´etant un enfant il est petit ˆetre.V.GER un.DET.INDF.M.SG enfant.N.SG il.PN.M.P3.SG ˆetre.V.PRES.P3.SG petit.ADJ.M.SG Figure 2: The French sentence from Figure 1(c,d) and its lexeme tags. Tags encode the root lexeme, part of speech, and morphological components (tense, gender, person, etc.) for each word in the exercise. and Windows devices. For comparison, that is more than the total number of students in U.S. elementary and secondary schools combined. At least 80 language courses are currently available or under development2 for the Duolingo platform. The most popular courses are for learning English, Spanish, French, and German, although there are also courses for minority languages (Irish Gaelic), and even constructed languages (Esperanto). More than half of Duolingo students live in developing countries, where Internet access has more than tripled in the past three years (ITU and UNESCO, 2015). The majority of these students are using Duolingo to learn English, which can significantly improve their job prospects and quality of life (Pinon and Haydon, 2010). 2.1 System Overview Duolingo uses a playfully illustrated, gamified design that combines point-reward incentives with implicit instruction (DeKeyser, 2008), mastery learning (Block et al., 1971), explanations (Fahy, 2https://incubator.duolingo.com 2004), and other best practices. Early research suggests that 34 hours of Duolingo is equivalent to a full semester of university-level Spanish instruction (Vesselinov and Grego, 2012). Figure 1(a) shows an example skill tree for English speakers learning French. This specifies the game-like curriculum: each icon represents a skill, which in turn teaches a set of thematically or grammatically related words or concepts. Students tap an icon to access lessons of new material, or to practice previously-learned material. Figure 1(b) shows a screen for the French skill Gerund, which teaches common gerund verb forms such as faisant (doing) and ´etant (being). This skill, as well as several others, have already been completed by the student. However, the Measures skill in the bottom right of Figure 1(a) has one lesson remaining. After completing each row of skills, students “unlock” the next row of more advanced skills. This is a gamelike implementation of mastery learning, whereby students must reach a certain level of prerequisite knowledge before moving on to new material. 1849 Each language course also contains a corpus (large database of available exercises) and a lexeme tagger (statistical NLP pipeline for automatically tagging and indexing the corpus; see the Appendix for details and a lexeme tag reference). Figure 1(c,d) shows an example translation exercise that might appear in the Gerund skill, and Figure 2 shows the lexeme tagger output for this sentence. 
Since this exercise is indexed with a gerund lexeme tag (ˆetre.V.GER in this case), it is available for lessons or practices in this skill. The lexeme tagger also helps to provide corrective feedback. Educational researchers maintain that incorrect answers should be accompanied by explanations, not simply a “wrong” mark (Fahy, 2004). In Figure 1(d), the student incorrectly used the 2nd-person verb form es (ˆetre.V.PRES.P2.SG) instead of the 3rd-person est (ˆetre.V.PRES.P3.SG). If Duolingo is able to parse the student response and detect a known grammatical mistake such as this, it provides an explanation3 in plain language. Each lesson continues until the student masters all of the target words being taught in the session, as estimated by a mixture model of short-term learning curves (Streeter, 2015). 2.2 Spaced Repetition and Practice Once a lesson is completed, all the target words being taught in the lesson are added to the student model. This model captures what the student has learned, and estimates how well she can recall this knowledge at any given time. Spaced repetition is a key component of the student model: over time, the strength of a skill will decay in the student’s long-term memory, and this model helps the student manage her practice schedule. Duolingo uses strength meters to visualize the student model, as seen beneath each of the completed skill icons in Figure 1(a). These meters represent the average probability that the student can, at any moment, correctly recall a random target word from the lessons in this skill (more on this probability estimate in §3.3). At four bars, the skill is “golden” and considered fresh in the student’s memory. At fewer bars, the skill has grown stale and may need practice. A student can tap the skill icon to access practice sessions and target her weakest words. For example, Figure 1(b) shows 3If Duolingo cannot parse the precise nature of the mistake — e.g., because of a gross typographical error — it provides a “diff” of the student’s response with the closest acceptable answer in the corpus (using Levenshtein distance). some weak words from the Gerund skill. Practice sessions are identical to lessons, except that the exercises are taken from those indexed with words (lexeme tags) due for practice according to student model. As time passes, strength meters continuously update and decay until the student practices. 3 Spaced Repetition Models In this section, we describe several spaced repetition algorithms that might be incorporated into our student model. We begin with two common, established methods in language learning technology, and then present our half-life regression model which is a generalization of them. 3.1 The Pimsleur Method Pimsleur (1967) was perhaps the first to make mainstream practical use of the spacing and lag effects, with his audio-based language learning program (now a franchise by Simon & Schuster). He referred to his method as graduated-interval recall, whereby new vocabulary is introduced and then tested at exponentially increasing intervals, interspersed with the introduction or review of other vocabulary. However, this approach is limited since the schedule is pre-recorded and cannot adapt to the learner’s actual ability. Consider an English-speaking French student who easily learns a cognate like pantalon (pants), but struggles to remember manteau (coat). With the Pimsleur method, she is forced to practice both words at the same fixed, increasing schedule. 
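Graduated-interval recall is easy to state in code. The toy sketch below is only illustrative (the starting interval and growth factor are arbitrary stand-ins, not Pimsleur's published schedule), but it makes the key limitation visible: the intervals grow exponentially and never react to the learner's performance.

```python
# Toy sketch of Pimsleur-style graduated-interval recall.
# Interval values are illustrative, not the published schedule.
def pimsleur_schedule(first_interval=5.0, factor=5.0, reviews=6):
    """Yield exponentially increasing review intervals (in seconds),
    regardless of whether the learner answers correctly."""
    interval = first_interval
    for _ in range(reviews):
        yield interval
        interval *= factor

print([int(s) for s in pimsleur_schedule()])
# [5, 25, 125, 625, 3125, 15625]
```

Every word follows the same fixed schedule, which is exactly the limitation noted above for pantalon versus manteau.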
3.2 The Leitner System Leitner (1972) proposed a different spaced repetition algorithm intended for use with flashcards. It is more adaptive than Pimsleur’s, since the spacing intervals can increase or decrease depending on student performance. Figure 3 illustrates a popular variant of this method. 1 2 4 8 16 correctly-remembered cards incorrectly-remembered cards Figure 3: The Leitner System for flashcards. The main idea is to have a few boxes that correspond to different practice intervals: 1-day, 2-day, 1850 4-day, and so on. All cards start out in the 1-day box, and if the student can remember an item after one day, it gets “promoted” to the 2-day box. Two days later, if she remembers it again, it gets promoted to the 4-day box, etc. Conversely, if she is incorrect, the card gets “demoted” to a shorter interval box. Using this approach, the hypothetical French student from §3.1 would quickly promote pantalon to a less frequent practice schedule, but continue reviewing manteau often until she can regularly remember it. Several electronic flashcard programs use the Leitner system to schedule practice, by organizing items into “virtual” boxes. In fact, when it first launched, Duolingo used a variant similar to Figure 3 to manage skill meter decay and practice. The present research was motivated by the need for a more accurate model, in response to student complaints that the Leitner-based skill meters did not adequately reflect what they had learned. 3.3 Half-Life Regression: A New Approach We now describe half-life regression (HLR), starting from psychological theory and combining it with modern machine learning techniques. Central to the theory of memory is the Ebbinghaus model, also known as the forgetting curve (Ebbinghaus, 1885). This posits that memory decays exponentially over time: p = 2−∆/h . (1) In this equation, p denotes the probability of correctly recalling an item (e.g., a word), which is a function of ∆, the lag time since the item was last practiced, and h, the half-life or measure of strength in the learner’s long-term memory. Figure 4(a) shows a forgetting curve (1) with half-life h = 1. Consider the following cases: 1. ∆= 0. The word was just recently practiced, so p = 20 = 1.0, conforming to the idea that it is fresh in memory and should be recalled correctly regardless of half-life. 2. ∆= h. The lag time is equal to the half-life, so p = 2−1 = 0.5, and the student is on the verge of being unable to remember. 3. ∆≫h. The word has not been practiced for a long time relative to its half-life, so it has probably been forgotten, e.g., p ≈0. Let x denote a feature vector that summarizes a student’s previous exposure to a particular word, and let the parameter vector Θ contain weights that correspond to each feature variable in x. Under the assumption that half-life should increase exponentially with each repeated exposure (a common practice in spacing and lag effect research), we let ˆhΘ denote the estimated half-life, given by: ˆhΘ = 2Θ·x . (2) In fact, the Pimsleur and Leitner algorithms can be interpreted as special cases of (2) using a few fixed, hand-picked weights. See the Appendix for the derivation of Θ for these two methods. For our purposes, however, we want to fit Θ empirically to learning trace data, and accommodate an arbitrarily large set of interesting features (we discuss these features more in §3.4). Suppose we have a data set D = {⟨p, ∆, x⟩i}D i=1 made up of student-word practice sessions. 
Each data instance consists of the observed recall rate p4, lag time ∆ since the word was last seen, and a feature vector x designed to help personalize the learning experience. Our goal is to find the best model weights Θ∗ to minimize some loss function ℓ:
Θ∗ = argmin_Θ ∑_{i=1}^{D} ℓ(⟨p, ∆, x⟩_i; Θ) . (3)
To illustrate, Figure 4(b) shows a student-word learning trace over the course of a month. Each ✖ indicates a data instance: the vertical position is the observed recall rate p for each practice session, and the horizontal distance between points is the lag time ∆ between sessions. Combining (1) and (2), the model prediction ˆpΘ = 2^(−∆/ˆhΘ) is plotted as a dashed line over time (which resets to 1.0 after each exposure, since ∆ = 0). The training loss function (3) aims to fit the predicted forgetting curves to observed data points for millions of student-word learning traces like this one.
We chose the L2-regularized squared loss function, which in its basic form is given by:
ℓ(⟨p, ∆, x⟩; Θ) = (p − ˆpΘ)² + λ∥Θ∥²₂ ,
where ⟨p, ∆, x⟩ is the training data instance, and λ is a parameter to control the regularization term and help prevent overfitting.
4In our setting, each data instance represents a full lesson or practice session, which may include multiple exercises reviewing the same word. Thus p represents the proportion of times a word was recalled correctly in a particular session.
Figure 4: Forgetting curves. (a) Predicted recall rate as a function of lag time ∆ and half-life h = 1. (b) Example student-word learning trace over 30 days: ✖ marks the observed recall rate p for each practice session, and half-life regression aims to fit model predictions ˆpΘ (dashed lines) to these points.
In practice, we found it useful to optimize for the half-life h in addition to the observed recall rate p. Since we do not know the "true" half-life of a given word in the student's memory — this is a hypothetical construct — we approximate it algebraically from (1) using p and ∆. We solve for h = −∆ / log₂(p) and use the final loss function:
ℓ(⟨p, ∆, x⟩; Θ) = (p − ˆpΘ)² + α(h − ˆhΘ)² + λ∥Θ∥²₂ ,
where α is a parameter to control the relative importance of the half-life term in the overall training objective function. Since ℓ is smooth with respect to Θ, we can fit the weights to student-word learning traces using gradient descent. See the Appendix for more details on our training and optimization procedures.

3.4 Feature Sets

In this work, we focused on features that were easily instrumented and available in the production Duolingo system, without adding latency to the student's user experience. These features fall into two broad categories:
• Interaction features: a set of counters summarizing each student's practice history with each word (lexeme tag). These include the total number of times a student has seen the word xn, the number of times it was correctly recalled x⊕, and the number of times incorrect x⊖. These are intended to help the model make more personalized predictions.
• Lexeme tag features: a large, sparse set of indicator variables, one for each lexeme tag in the system (about 20k in total). These are intended to capture the inherent difficulty of each particular word (lexeme tag).
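To make the model concrete, the following is a simplified sketch of the prediction and stochastic gradient update implied by the equations above. The feature names, the clipping bounds on p and on the half-life, and the way regularization is folded into the update are illustrative assumptions made for the sketch; they are not necessarily the choices used in the production system or in the released code.

```python
import math
from collections import defaultdict

LN2 = math.log(2.0)

# Simplified sketch of half-life regression. Feature vectors are dicts mapping
# feature names (e.g., "bias", "sqrt_seen", or a lexeme tag) to values.
# Clipping bounds and hyperparameter defaults are illustrative assumptions.
class HLRSketch:
    def __init__(self, alpha=0.01, lam=0.1, eta=0.001):
        self.theta = defaultdict(float)                    # model weights Θ
        self.alpha, self.lam, self.eta = alpha, lam, eta

    def half_life(self, x):
        dot = sum(self.theta[k] * v for k, v in x.items())
        h_hat = 2.0 ** dot                                 # ĥ_Θ = 2^(Θ·x), equation (2)
        return min(max(h_hat, 15.0 / (24 * 60)), 274.0)    # clip (assumed bounds, in days)

    def predict(self, x, delta):
        return 2.0 ** (-delta / self.half_life(x))         # p̂_Θ = 2^(−∆/ĥ_Θ)

    def sgd_step(self, p, delta, x):
        p = min(max(p, 0.0001), 0.9999)
        h = min(max(-delta / math.log2(p), 15.0 / (24 * 60)), 274.0)  # h = −∆ / log2(p)
        h_hat = self.half_life(x)
        p_hat = 2.0 ** (-delta / h_hat)
        for k, v in x.items():
            # gradient of the (p − p̂)² term
            g = -2.0 * (p - p_hat) * (LN2 ** 2) * p_hat * (delta / h_hat) * v
            # gradient of the α(h − ĥ)² term
            g += -2.0 * self.alpha * (h - h_hat) * LN2 * h_hat * v
            # L2 regularization term
            g += 2.0 * self.lam * self.theta[k]
            self.theta[k] -= self.eta * g

# Example: the first session of the learning trace in Table 1 (below),
# with square-rooted interaction counts.
model = HLRSketch()
x = {"bias": 1.0, "sqrt_seen": math.sqrt(3), "sqrt_right": math.sqrt(2),
     "sqrt_wrong": math.sqrt(1), "lexeme:être.V.GER": 1.0}
model.sgd_step(p=1.0, delta=0.6, x=x)
print(model.predict(x, delta=1.7))
```

Because each per-instance gradient only touches the features that are active in x, an update of this form scales to the sparse space of roughly 20k lexeme tag indicators.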
recall rate    lag (days)    feature vector x
p (⊕/n)        ∆             xn    x⊕    x⊖    xˆetre.V.GER
1.0 (3/3)      0.6           3     2     1     1
0.5 (2/4)      1.7           6     5     1     1
1.0 (3/3)      0.7           10    7     3     1
0.8 (4/5)      4.7           13    10    3     1
0.5 (1/2)      13.5          18    14    4     1
1.0 (3/3)      2.6           20    15    5     1
Table 1: Example training instances. Each row corresponds to a data point in Figure 4(b) above, which is for a student learning the French word ´etant (lexeme tag ˆetre.V.GER).

To be more concrete, imagine that the trace in Figure 4(b) is for a student learning the French word ´etant (lexeme tag ˆetre.V.GER). Table 1 shows what ⟨p, ∆, x⟩ would look like for each session in the student's history with that word. The interaction features increase monotonically5 over time, and xˆetre.V.GER is the only lexeme feature to "fire" for these instances (it has value 1, all other lexeme features have value 0). The model also includes a bias weight (intercept) not shown here.

4 Experiments

In this section, we compare variants of HLR with other spaced repetition algorithms in the context of Duolingo. First, we evaluate methods against historical log data, and analyze trained model weights for insight. We then describe two controlled user experiments where we deployed HLR as part of the student model in the production system.
5Note that in practice, we found that using the square root of interaction feature counts (e.g., √x⊕) yielded better results than the raw counts shown here.

Model                 MAE↓     AUC↑     CORh↑
HLR                   0.128*   0.538*   0.201*
HLR -lex              0.128*   0.537*   0.160*
HLR -h                0.350    0.528*   -0.143*
HLR -lex-h            0.350    0.528*   -0.142*
Leitner               0.235    0.542*   -0.098*
Pimsleur              0.445    0.510*   -0.132*
LR                    0.211    0.513*   n/a
LR -lex               0.212    0.514*   n/a
Constant ¯p = 0.859   0.175    n/a      n/a
Table 2: Evaluation results using historical log data (see text). Arrows indicate whether lower (↓) or higher (↑) scores are better. The best method for each metric is shown in bold, and statistically significant effects (p < 0.001) are marked with *.

4.1 Historical Log Data Evaluation

We collected two weeks of Duolingo log data, containing 12.9 million student-word lesson and practice session traces similar to Table 1 (for all students in all courses). We then compared three categories of spaced repetition algorithms:
• Half-life regression (HLR), our model from §3.3. For ablation purposes, we consider four variants: with and without lexeme features (-lex), as well as with and without the half-life term in the loss function (-h).
• Leitner and Pimsleur, two established baselines that are special cases of HLR, using fixed weights. See the Appendix for a derivation of the model weights we used.
• Logistic regression (LR), a standard machine learning6 baseline. We evaluate two variants: with and without lexeme features (-lex).
We used the first 1 million instances of the data to tune the parameters for our training algorithm. After trying a handful of values, we settled on λ = 0.1, α = 0.01, and learning rate η = 0.001. We used these same training parameters for HLR and LR experiments (the Leitner and Pimsleur models are fixed and do not require training).
6For LR models, we include the lag time x∆ as an additional feature, since — unlike HLR — it isn't explicitly accounted for in the model. We experimented with polynomial and exponential transformations of this feature as well, but found the raw lag time to work best.
Table 2 shows the evaluation results on the full data set of 12.9 million instances, using the first 90% for training and remaining 10% for testing.
We consider several different evaluation measures for a comprehensive comparison: • Mean absolute error (MAE) measures how closely predictions resemble their observed outcomes: 1 D PD i=1 |p −ˆpΘ|i. Since the strength meters in Duolingo’s interface are based on model predictions, we use MAE as a measure of prediction quality. • Area under the ROC curve (AUC) — or the Wilcoxon rank-sum test — is a measure of ranking quality. Here, it represents the probability that a model ranks a random correctlyrecalled word as more likely than a random incorrectly-recalled word. Since our model is used to prioritize words for practice, we use AUC to help evaluate these rankings. • Half-life correlation (CORh) is the Spearman rank correlation between ˆhΘ and the algebraic estimate h described in §3.3. We use this as another measure of ranking quality. For all three metrics, HLR with lexeme tag features is the best (or second best) approach, followed closely by HLR -lex (no lexeme tags). In fact, these are the only two approaches with MAE lower than a baseline constant prediction of the average recall rate in the training data (Table 2, bottom row). These HLR variants are also the only methods with positive CORh, although this seems reasonable since they are the only two to directly optimize for it. While lexeme tag features made limited impact, the h term in the HLR loss function is clearly important: MAE more than doubles without it, and the -h variants are generally worse than the other baselines on at least one metric. As stated in §3.2, Leitner was the spaced repetition algorithm used in Duolingo’s production student model at the time of this study. The Leitner method did yield the highest AUC7 values among the algorithms we tried. However, the top two HLR variants are not far behind, and they also reduce MAE compared to Leitner by least 45%. 7AUC of 0.5 implies random guessing (Fawcett, 2006), so the AUC values here may seem low. This is due in part to an inherently noisy prediction task, but also to a range restriction: ¯p = 0.859, so most words are recalled correctly and predictions tend to be high. Note that all reported AUC values are statistically significantly better than chance using a Wilcoxon rank sum test with continuity correction. 1853 Lg. Word Lexeme Tag θk EN camera camera.N.SG 0.77 EN ends end.V.PRES.P3.SG 0.38 EN circle circle.N.SG 0.08 EN rose rise.V.PST -0.09 EN performed perform.V.PP -0.48 EN writing write.V.PRESP -0.81 ES liberal liberal.ADJ.SG 0.83 ES como comer.V.PRES.P1.SG 0.40 ES encuentra encontrar.V.PRES.P3.SG 0.10 ES est´a estar.V.PRES.P3.SG -0.05 ES pensando pensar.V.GER -0.33 ES quedado quedar.V.PP.M.SG -0.73 FR visite visiter.V.PRES.P3.SG 0.94 FR suis ˆetre.V.PRES.P1.SG 0.47 FR trou trou.N.M.SG 0.05 FR dessous dessous.ADV -0.06 FR ceci ceci.PN.NT -0.45 FR fallait falloir.V.IMPERF.P3.SG -0.91 DE Baby Baby.N.NT.SG.ACC 0.87 DE sprechen sprechen.V.INF 0.56 DE sehr sehr.ADV 0.13 DE den der.DET.DEF.M.SG.ACC -0.07 DE Ihnen Sie.PN.P3.PL.DAT.FORM -0.55 DE war sein.V.IMPERF.P1.SG -1.10 Table 3: Lexeme tag weights for English (EN), Spanish (ES), French (FR), and German (DE). 4.2 Model Weight Analysis In addition to better predictions, HLR can capture the inherent difficulty of concepts that are encoded in the feature set. The “easier” concepts take on positive weights (less frequent practice resulting from longer half-lifes), while the “harder” concepts take on negative weights (more frequent practice resulting from shorter half-lifes). 
Table 3 shows HLR model weights for several English, Spanish, French, and German lexeme tags. Positive weights are associated with cognates and words that are common, short, or morphologically simple to inflect; it is reasonable that these would be easier to recall correctly. Negative weights are associated with irregular forms, rare words, and grammatical constructs like past or present participles and imperfective aspect. These model weights can provide insight into the aspects of language that are more or less challenging for students of a second language. Daily Retention Activity Experiment Any Lesson Practice I. HLR (v. Leitner) +0.3 +0.3 -7.3* II. HLR -lex (v. HLR) +12.0* +1.7* +9.5* Table 4: Change (%) in daily student retention for controlled user experiments. Statistically significant effects (p < 0.001) are marked with *. 4.3 User Experiment I The evaluation in §4.1 suggests that HLR is a better approach than the Leitner algorithm originally used by Duolingo (cutting MAE nearly in half). To see what effect, if any, these gains have on actual student behavior, we ran controlled user experiments in the Duolingo production system. We randomly assigned all students to one of two groups: HLR (experiment) or Leitner (control). The underlying spaced repetition algorithm determined strength meter values in the skill tree (e.g., Figure 1(a)) as well as the ranking of target words for practice sessions (e.g., Figure 1(b)), but otherwise the two conditions were identical. The experiment lasted six weeks and involved just under 1 million students. For evaluation, we examined changes in daily retention: what percentage of students who engage in an activity return to do it again the following day? We used three retention metrics: any activity (including contributions to crowdsourced translations, online forum discussions, etc.), new lessons, and practice sessions. Results are shown in the first row of Table 4. The HLR group showed a slight increase in overall activity and new lessons, but a significant decrease in practice. Prior to the experiment, many students claimed that they would practice instead of learning new material “just to keep the tree gold,” but that practice sessions did not review what they thought they needed most. This drop in practice — plus positive anecdotal feedback about stength meter quality from the HLR group — led us to believe that HLR was actually better for student engagement, so we deployed it for all students. 4.4 User Experiment II Several months later, active students pointed out that particular words or skills would decay rapidly, regardless of how often they practiced. Upon closer investigation, these complaints could be 1854 traced to lexeme tag features with highly negative weights in the HLR model (e.g., Table 3). This implied that some feature-based overfitting had occurred, despite the L2 regularization term in the training procedure. Duolingo was also preparing to launch several new language courses at the time, and no training data yet existed to fit lexeme tag feature weights for these new languages. Since the top two HLR variants were virtually tied in our §4.1 experiments, we hypothesized that using interaction features alone might alleviate both student frustration and the “cold-start” problem of training a model for new languages. In a follow-up experiment, we randomly assigned all students to one of two groups: HLR -lex (experiment) and HLR (control). The experiment lasted two weeks and involved 3.3 million students. 
Results are shown in the second row of Table 4. All three retention metrics were significantly higher for the HLR -lex group. The most substantial increase was for any activity, although recurring lessons and practice sessions also improved (possibly as a byproduct of the overall activity increase). Anecdotally, vocal students from the HLR -lex group who previously complained about rapid decay under the HLR model were also positive about the change. We deployed HLR -lex for all students, and believe that its improvements are at least partially responsible for the consistent 5% month-on-month growth in active Duolingo users since the model was launched. 5 Other Related Work Just as we drew upon the theories of Ebbinghaus to derive HLR as an empirical spaced repetition model, there has been other recent work drawing on other (but related) theories of memory. ACT-R (Anderson et al., 2004) is a cognitive architecture whose declarative memory module8 takes the form of a power function, in contrast to the exponential form of the Ebbinghaus model and HLR. Pavlik and Anderson (2008) used ACT-R predictions to optimize a practice schedule for second-language vocabulary, although their setting was quite different from ours. They assumed fixed intervals between practice exercises within the same laboratory session, and found that they could improve short-term learning within a ses8Declarative (specifically semantic) memory is widely regarded to govern language vocabulary (Ullman, 2005). sion. In contrast, we were concerned with making accurate recall predictions between multiple sessions “in the wild” on longer time scales. Evidence also suggests that manipulation between sessions can have greater impact on long-term learning (Cepeda et al., 2006). Motivated by long-term learning goals, the multiscale context model (MCM) has also been proposed (Mozer et al., 2009). MCM combines two modern theories of the spacing effect (Staddon et al., 2002; Raaijmakers, 2003), assuming that each time an item is practiced it creates an additional item-specific forgetting curve that decays at a different rate. Each of these forgetting curves is exponential in form (similar to HLR), but are combined via weighted average, which approximates a power law (similar to ACT-R). The authors were able to fit models to controlled laboratory data for second-language vocabulary and a few other memory tasks, on times scales up to several months. We were unaware of MCM at the time of our work, and it is unclear if the additional computational overhead would scale to Duolingo’s production system. Nevertheless, comparing to and integrating with these ideas is a promising direction for future work. There has also been work on more heuristic spaced repetition models, such as SuperMemo (Wo´zniak, 1990). Variants of this algorithm are popular alternatives to Leitner in some flashcard software, leveraging additional parameters with complex interactions to determine spacing intervals for practice. To our knowledge, these additional parameters are hand-picked as well, but one can easily imagine fitting them empirically to real student log data, as we do with HLR. 6 Conclusion We have introduced half-life regression (HLR), a novel spaced repetition algorithm with applications to second language acquisition. HLR combines a psycholinguistic model of human memory with modern machine learning techniques, and generalizes two popular algorithms used in language learning technology: Leitner and Pimsleur. 
We can do this by incorporating arbitrarily rich features and fitting their weights to data. This approach is significantly more accurate at predicting student recall rates than either of the previous methods, and is also better than a conventional machine learning approach like logistic regression. 1855 One result we found surprising was that lexeme tag features failed to improve predictions much, and in fact seemed to frustrate the student learning experience due to over-fitting. Instead of the sparse indicator variables used here, it may be better to decompose lexeme tags into denser and more generic features of tag components9 (e.g., part of speech, tense, gender, case), and also use corpus frequency, word length, etc. This representation might be able to capture useful and interesting regularities without negative side-effects. Finally, while we conducted a cursory analysis of model weights in §4.2, an interesting next step would be to study such weights for even deeper insight. (Note that using lexeme tag component features, as suggested above, should make this anaysis more robust since features would be less sparse.) For example, one could see whether the ranking of vocabulary and/or grammar components by feature weight is correlated with external standards such as the CEFR (Council of Europe, 2001). This and other uses of HLR hold the potential to transform data-driven curriculum design. Data and Code To faciliatate research in this area, we have publicly released our data set and code from §4.1: https://github.com/duolingo/halflife-regression. Acknowledgments Thanks to our collaborators at Duolingo, particularly Karin Tsai, Itai Hass, and Andr´e Horie for help gathering data from various parts of the system. We also thank the anonymous reviewers for suggestions that improved the final manuscript. References J.R. Anderson, D. Bothell, M.D. Byrne, S. Douglass, C. Libiere, and Y. Qin. 2004. An intergrated theory of mind. Psychological Review, 111:1036–1060. R.C. Atkinson. 1972. Optimizing the learning of a second-language vocabulary. Journal of Experimental Psychology, 96(1):124–129. J.H. Block, P.W. Airasian, B.S. Bloom, and J.B. Carroll. 1971. Mastery Learning: Theory and Practice. Holt, Rinehart, and Winston, New York. 9Engineering-wise, each lexeme tag (e.g., ˆetre.V.GER) is represented by an ID in the system. We used indicator variables in this work since the IDs are readily available; the overhead of retreiving all lexeme components would be inefficient in the production system. Of course, we could optimize for this if there were evidence of a significant improvement. K.C. Bloom and T.J. Shuell. 1981. Effects of massed and distributed practice on the learning and retention of second language vocabulary. Journal of Educational Psychology, 74:245–248. N.J. Cepeda, H. Pashler, E. Vul, J.T. Wixted, and D. Rohrer. 2006. Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3):354. Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge University Press. R. DeKeyser. 2008. Implicit and explicit learning. In The Handbook of Second Language Acquisition, chapter 11, pages 313–348. John Wiley & Sons. F.N. Dempster. 1989. Spacing effects and their implications for theory and practice. Educational Psychology Review, 1(4):309–330. J.J. Donovan and D.J. Radosevich. 1999. A metaanalytic review of the distribution of practice effect: Now you see it, now you don’t. 
Journal of Applied Psychology, 84(5):795–805. J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. H. Ebbinghaus. 1885. Memory: A Contribution to Experimental Psychology. Teachers College, Columbia University, New York, NY, USA. P.J. Fahy. 2004. Media characteristics and online learning technology. In T. Anderson and F. Elloumi, editors, Theory and Practice of Online Learning, pages 137–171. Athabasca University. T. Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27:861–874. M.L. Forcada, M. Ginest´ı-Rosell, J. Nordfalk, J. O’Regan, S. Ortiz-Rojas, J.A. P´erez-Ortiz, F. S´anchez-Mart´ınez, G. Ram´ırez-S´anchez, and F.M. Tyers. 2011. Apertium: A free/opensource platform for rule-based machine translation. Machine Translation, 25(2):127–144. http://wiki.apertium.org/wiki/Main Page. ITU and UNESCO. 2015. The state of broadband 2015. Technical report, September. S. Leitner. 1972. So lernt man lernen. Angewandte Lernpsychologie – ein Weg zum Erfolg. Verlag Herder, Freiburg im Breisgau, Germany. A.W. Melton. 1970. The situation with respect to the spacing of repetitions and memory. Journal of Verbal Learning and Verbal Behavior, 9:596–606. M.C. Mozer, H. Pashler, N. Cepeda, R.V. Lindsey, and E. Vul. 2009. Predicting the optimal spacing of study: A multiscale context model of memory. In Advances in Neural Information Processing Systems, volume 22, pages 1321–1329. 1856 P.I. Pavlik Jr and J.R. Anderson. 2008. Using a model to compute the optimal schedule of practice. Journal of Experimental Psychology: Applied, 14(2):101—117. P. Pimsleur. 1967. A memory schedule. Modern Language Journal, 51(2):73–75. R. Pinon and J. Haydon. 2010. The benefits of the English language for individuals and societies: Quantitative indicators from Cameroon, Nigeria, Rwanda, Bangladesh and Pakistan. Technical report, Euromonitor International for the British Council. J.G.W. Raaijmakers. 2003. Spacing and repetition effects in human memory: Application of the sam model. Cognitive Science, 27(3):431–452. T.C. Ruth. 1928. Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35:19–45. J.E.R. Staddon, I.M. Chelaru, and J.J. Higa. 2002. Habituation, memory and the brain: The dynamics of interval timing. Behavioural Processes, 57(2):71– 88. M. Streeter. 2015. Mixture modeling of individual learning curves. In Proceedings of the International Conference on Educational Data Mining (EDM). M.T. Ullman. 2005. A cognitive neuroscience perspective on second language acquisition: The declarative/procedural model. In C. Sanz, editor, Mind and Context in Adult Second Language Acquisition: Methods, Theory, and Practice, pages 141– 178. Georgetown University Press. R. Vesselinov and J. Grego. 2012. Duolingo effectiveness study. Technical report, Queens College, City University of New York. Wikimedia Foundation. 2002. Wiktionary: A wikibased open content dictionary, retrieved 2012–2015. https://www.wiktionary.org. P.A. Wo´zniak. 1990. Optimization of learning. Master’s thesis, University of Technology in Pozna´n. A Appendix A.1 Lexeme Tagger Details We use a lexeme tagger, introduced in §2, to analyze and index the learning corpus and student responses. Since Duolingo courses teach a moderate set of words and concepts, we do not necessarily need a complete, general-purpose, multi-lingual NLP stack. 
Instead, for each language we use a finite state transducer (FST) to efficiently parse candidate lexeme tags10 for each word. We then use a 10The lexeme tag set is based on a large morphology dictionary created by the Apertium project (Forcada et al., 2011), which we supplemented with entries from Wiktionary (Wikimedia Foundation, 2002) and other sources. Each Duolingo course teaches about 3,000–5,000 lexeme tags. Abbreviation Meaning ACC accusative case ADJ adjective ADV adverb DAT dative case DEF definite DET determiner FORM formal register F feminine GEN genitive case GER gerund IMPERF imperfective aspect INDF indefinite INF infinitive M masculine N noun NT neuter P1/P2/P3 1st/2nd/3rd person PL plural PN pronoun PP past participle PRESP present participle PRES present tense PST past tense SG singular V verb Table 5: Lexeme tag component abbreviations. hidden Markov model (HMM) to determine which tag is correct in a given context. Consider the following two Spanish sentences: ‘Yo como manzanas’ (‘I eat apples’) and ‘Corro como el viento’ (‘I run like the wind’). For both sentences, the FST parses the word como into the lexeme tag candidates comer.V.PRES.P1.SG ([I] eat) and como.ADV.CNJ (like/as). The HMM then disambiguates between the respective tags for each sentence. Table 5 contains a reference of the abbreviations used in this paper for lexeme tags. A.2 Pimsleur and Leitner Models As mentioned in §3.3, the Pimsleur and Leitner algorithms are special cases of HLR using fixed, hand-picked weights. To see this, consider the original practice interval schedule used by Pimsleur (1967): 5 sec, 25 sec, 2 min, 10 min, 1 hr, 5 hr, 1 day, 5 days, 25 days, 4 months, and 2 years. If we interpret this as a sequence of ˆhΘ half-lifes (i.e., students should practice when ˆpΘ = 0.5), we can rewrite (2) and solve for log2(ˆhΘ) as a linear 1857 equation. This yields Θ = {xn : 2.4, xb : -16.5}, where xn and xb are the number of practices and a bias weight (intercept), respectively. This model perfectly reconstructs Pimsleur’s original schedule in days (r2 = 0.999, p ≪0.001). Analyzing the Leitner variant from Figure 3 is even simpler: this corresponds to Θ = {x⊕: 1, x⊖: -1}, where x⊕ is the number of past correct responses (i.e., doubling the interval), and x⊖is the number of incorrect responses (i.e., halving the interval). A.3 Training and Optimization Details The complete objective function given in §3.3 for half-life regression is: ℓ(⟨p, ∆, x⟩; Θ) = (p −ˆpΘ)2 + α(h −ˆhΘ)2 + λ∥Θ∥2 2 . Substituting (1) and (2) into this equation produces the following more explicit formulation: ℓ(⟨p, ∆, x⟩; Θ) =  p −2− ∆ 2Θ·x 2 + α  −∆ log2(p) −2Θ·x 2 + λ∥Θ∥2 2 . In general, the search for Θ∗weights to minimize ℓcannot be solved in closed form, but since it is a smooth function, it can be optimized using gradient methods. The partial gradient of ℓwith respect to each θk weight is given by: ∂ℓ ∂θk = 2(ˆpΘ −p) ln2(2)ˆpΘ  ∆ ˆhΘ  xk + 2α  ˆhΘ + ∆ log2(p)  ln(2)ˆhΘxk + 2λθk . In order to fit Θ to a large amount of student log data, we use AdaGrad (Duchi et al., 2011), an online algorithm for stochastic gradient descent (SGD). AdaGrad is typically less sensitive to the learning rate parameter η than standard SGD, by dynamically scaling each weight update as a function of how often the corresponding feature appears in the training data: θ(+1) k := θk −η h c(xk)−1 2 i ∂ℓ ∂θk . Here c(xk) denotes the number of times feature xk has had a nonzero value so far in the SGD pass through the training data. 
This is useful for training stability when using large, sparse feature sets (e.g., the lexeme tag features in this study). Note that to prevent computational overflow and underflow errors, we bound $\hat{p}_\Theta \in [0.0001, 0.9999]$ and $\hat{h}_\Theta \in [15\ \text{min}, 9\ \text{months}]$ in practice.
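Putting the pieces of Appendix A.3 together, the following minimal Python sketch transcribes the HLR prediction, loss gradient, and AdaGrad update given above. The hyperparameter values, the clipping of the observed recall rate, and the sparse dict-of-features representation are assumptions for illustration, not Duolingo's production code.

import math
from collections import defaultdict

LN2 = math.log(2)
MIN_HL, MAX_HL = 15.0 / (24 * 60), 274.0   # roughly 15 minutes and 9 months, in days

def predict(theta, x, delta):
    # x: dict of feature name -> value; delta: days since the word was last practiced
    h_hat = 2.0 ** sum(theta[k] * v for k, v in x.items())
    h_hat = min(max(h_hat, MIN_HL), MAX_HL)
    p_hat = min(max(2.0 ** (-delta / h_hat), 0.0001), 0.9999)
    return p_hat, h_hat

def sgd_step(theta, counts, x, p, delta, alpha=0.01, lam=0.1, eta=0.001):
    p_hat, h_hat = predict(theta, x, delta)
    p = min(max(p, 0.0001), 0.9999)        # keep log2(p) finite (assumed clipping)
    h = -delta / math.log2(p)              # algebraic half-life estimate from Section 3.3
    for k, v in x.items():
        grad = (2 * (p_hat - p) * LN2 ** 2 * p_hat * (delta / h_hat) * v
                + 2 * alpha * (h_hat - h) * LN2 * h_hat * v
                + 2 * lam * theta[k])
        counts[k] += 1                     # AdaGrad counter c(x_k)
        theta[k] -= eta * counts[k] ** -0.5 * grad

With theta = defaultdict(float) and counts = defaultdict(int), repeatedly calling sgd_step over the training instances performs the AdaGrad training described above.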
2016
174
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1859–1869, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics User Modeling in Language Learning with Macaronic Texts Adithya Renduchintala and Rebecca Knowles and Philipp Koehn and Jason Eisner Department of Computer Science Johns Hopkins University {adi.r,rknowles,phi,eisner}@jhu.edu Abstract Foreign language learners can acquire new vocabulary by using cognate and context clues when reading. To measure such incidental comprehension, we devise an experimental framework that involves reading mixed-language “macaronic” sentences. Using data collected via Amazon Mechanical Turk, we train a graphical model to simulate a human subject’s comprehension of foreign words, based on cognate clues (edit distance to an English word), context clues (pointwise mutual information), and prior exposure. Our model does a reasonable job at predicting which words a user will be able to understand, which should facilitate the automatic construction of comprehensible text for personalized foreign language education. 1 Introduction Second language (L2) learning requires the acquisition of vocabulary as well as knowledge of the language’s constructions. One of the ways in which learners become familiar with novel vocabulary and constructions is through reading. According to Krashen’s Input Hypothesis (Krashen, 1989), learners acquire language through incidental learning, which occurs when learners are exposed to comprehensible input. What constitutes “comprehensible input” for a learner varies as their knowledge of the L2 increases. For example, a student in their first month of German lessons would be hard-pressed to read German novels or even front-page news, but they might understand brief descriptions of daily routines. Comprehensible input need not be completely familiar to the learner; it could include novel vocabulary items or structures (whose meanings they can glean from context). Such input falls in the “zone of proximal development” (Vygotski˘ı, 2012), just outside of the learner’s comfort zone. The related concept of “scaffolding” (Wood et al., 1976) consists of providing assistance to the learner at a level that is just sufficient for them to complete their task, which in our case is understanding a sentence. Automatic selection or construction of comprehensible input—perhaps online and personalized—would be a useful educational technology. However, this requires modeling the student: what can an L2 learner understand in a given context? In this paper, we develop a model and train its parameters on data that we collect. For the remainder of the paper we focus on native English speakers learning German. Our methodology is a novel solution to the problem of controlling for the learner’s German skill level. We use subjects with zero previous knowledge of German, but we translate portions of the sentence into English. Thus, we can presume that they do already know the English words and do not already know the German words (except from seeing them in earlier trials within our experiment). We are interested in whether they can jointly infer the meanings of the remaining German words in the sentence, so we ask them to guess. The resulting stimuli are sentences like “Der Polizist arrested the Bankr¨auber.” Even a reader with no knowledge of German is likely to be able to understand this sentence reasonably well by using cognate and context clues. 
We refer to this as a macaronic sentence; so-called macaronic language is a pastiche of two or more languages (often intended for humorous effect). Our experimental subjects are required to guess what “Polizist” and “Bankr¨auber” mean in this sentence. We train a featurized model to predict these guesses jointly within each sentence and thereby predict incidental comprehension on any macaronic sentence. Indeed, we hope our model design will generalize from predicting incidental comprehension on macaronic sentences (for our beginner subjects, who need some context words to be in English) to predicting incidental comprehension on full German sentences (for more ad1859 vanced students, who understand some of the context words as if they were in English). In addition, we are developing a user interface that uses macaronic sentences directly as a medium of language instruction: our companion paper (Renduchintala et al., 2016) gives an overview of that project. We briefly review previous work, then describe our data collection setup and the data obtained. Finally, we discuss our model of learner comprehension and validate our model’s predictions. 2 Previous Work Natural language processing (NLP) has long been applied to education, but the majority of this work focuses on evaluation and assessment. Prominent recent examples include Heilman and Madnani (2012), Burstein et al. (2013) and Madnani et al. (2012). Other works fall more along the lines of intelligent and adaptive tutoring systems designed to improve learning outcomes. Most of those are outside of the area of NLP (typically focusing on math or science). An overview of NLPbased work in the education sphere can be found in Litman (2016). There has also been work specific to second language acquisition, such as ¨Ozbal et al. (2014), where the focus has been to build a system to help learners retain new vocabulary. However, much of the existing work on incidental learning is found in the education and cognitive science literature rather than NLP. Our work is related to Labutov and Lipson (2014), which also tries to leverage incidental learning using mixed L1 and L2 language. Where their work uses surprisal to choose contexts in which to insert L2 vocabulary, we consider both context features and other factors such as cognate features (described in detail in 4.1). We collect data that gives direct evidence of the user’s understanding of words (by asking them to provide English guesses) rather than indirectly (via questions about sentence validity, which runs the risk of overestimating their knowledge of a word, if, for instance, they’ve only learned whether it is animate or inanimate rather than the exact meaning). Furthermore, we are not only interested in whether a mixed L1 and L2 sentence is comprehensible; we are also interested in determining a distribution over the learner’s belief state for each word in the sentence. We do this in an engaging, game-like setting, which provides the user with hints when the task is too difficult for them to complete. 3 Data Collection Setup Our method of scaffolding is to replace certain foreign words and phrases with their English translations, yielding a macaronic sentence.1 Simply presenting these to a learner would not give us feedback on the learner’s belief state for each foreign word. Even assessing the learner’s reading comprehension would give only weak indirect information about what was understood. 
Thus, we collect data where a learner explicitly guesses a foreign word’s translation when seen in the macaronic context. These guesses are then treated as supervised labels to train our user model. We used Amazon Mechanical Turk (MTurk) to collect data. Users qualified for tasks by completing a short quiz and survey about their language knowledge. Only users whose results indicated no knowledge of German and self-identified as native speakers of English were allowed to complete tasks. With German as the foreign language, we generated content by crawling a simplifiedGerman news website, nachrichtenleicht. de. We chose simplified German in order to minimize translation errors and to make the task more suitable for novice learners. We translated each German sentence using the Moses Statistical Machine Translation (SMT) toolkit (Koehn et al., 2007). The SMT system was trained on the German-English Commoncrawl parallel text used in WMT 2015 (Bojar et al., 2015). We used 200 German sentences, presenting each to 10 different users. In MTurk jargon, this yielded 2000 Human Intelligence Tasks (HITs). Each HIT required its user to participate in several rounds of guessing as the English translation was incrementally revealed. A user was paid US$0.12 per HIT, with a bonus of US$6 to any user who accumulated more than 2000 total points. Our HIT user interface is shown in the video at https://youtu.be/9PczEcnr4F8. 3.1 HITs and Submissions For each HIT, the user first sees a German sentence2 (Figure 1). A text box is presented below each German word in the sentence, for the user 1Although the language distinction is indicated by italics and color, users were left to figure this out on their own. 2Except that we first “translate” any German words that have identical spelling in English (case-insensitive). This includes most proper names, numerals, and punctuation marks. Such translated words are displayed in English style (blue italics), and the user is not asked to guess their meanings. 1860 Figure 1: After a user submits a set of guesses (top), the interface marks the correct guesses in green and also reveals a set of translation clues (bottom). The user now has the opportunity to guess again for the remaining German words. to type in their “best guess” of what each German word means. The user must fill in at least half of the text boxes before submitting this set of guesses. The resulting submission—i.e., the macaronic sentence together with the set of guesses—is logged in a database as a single training example, and the system displays feedback to the user about which guesses were correct. After each submission, new clues are revealed (providing increased scaffolding) and the user is asked to guess again. The process continues, yielding multiple submissions, until all German words in the sentence have been translated. At this point, the entire HIT is considered completed and the user moves to a new HIT (i.e., a new sentence). From our 2000 HITs, we obtained 9392 submissions (4.7 per HIT) from 79 distinct MTurk users. 3.2 Clues Each update provides new clues to help the user make further guesses. There are 2 kinds of clues: Translation Clue (Figure 1): A set of words that were originally in German are replaced with their English translations. The text boxes below these words disappear, since it is no longer necessary to guess them. Reordering Clue (Figure 2): A German substring is moved into a more English-like position. 
The reordering positions are calculated using the word and phrase alignments obtained from Moses. Each time the user submits a set of guesses, we reveal a sequence of n = max(1, round(N/3)) clues, where N is the number of German words remaining in the sentence. For each clue, we sample a token that is currently in German. If the token is Figure 2: In this case, after the user submits a set of guesses (top), two clues are revealed (bottom): ausgestellt is moved into English order and then translated. part of a movable phrase, we move that phrase; otherwise we translate the minimal phrase containing that token. These moves correspond exactly to clues that a user could request by clicking on the token in the macaronic reading interface of Renduchintala et al. (2016)—see that paper for details of how moves are constructed and animated. In our present experiments, the system is in control instead, and grants clues by “randomly clicking” on n tokens. The system’s probability of sampling a given token is proportional to its unigram type probability in the WMT corpus. Thus, rarer words tend to remain in German for longer, allowing the Turker to attempt more guesses for these difficult words. 3.3 Feedback When a user submits a set of guesses, the system responds with feedback. Each guess is visibly “marked” in left-to-right order, momentarily shaded with green (for correct), yellow (for close) or red (for incorrect). Depending on whether a guess is correct, close, or wrong, users are awarded points as discussed below. Yellow and red shading then fades, to signal to the user that they may try entering a new guess. Correct guesses remain on the screen for the entire task. 3.4 Points Adding points to the process (Figures 1–2) adds a game-like quality and lets us incentivize users by paying them for good performance (see section 3). We award 10 points for each exactly correct guess (case-insensitive). We give additional “effort points” for a guess that is close to the cor1861 rect translation, as measured by cosine similarity in vector space. (We used pre-trained GLoVe word vectors (Pennington et al., 2014); when the guess or correct translation has multiple words, we take the average of the word vectors.) We deduct effort points for guesses that are careless or very poor. Our rubric for effort points is as follows: ep =              −1, if ˆe is repeated or nonsense (red) −1, if sim(ˆe, e∗) < 0 (red) 0, if 0 ≤sim(ˆe, e∗) < 0.4 (red) 0, if ˆe is blank 10 × sim(ˆe, e∗) otherwise (yellow) Here sim(ˆe, e∗) is cosine similarity between the vector embeddings of the user’s guess ˆe and our reference translation e∗. A “nonsense” guess contains a word that does not appear in the sentence bitext nor in the 20,000 most frequent word types in the GLoVe training corpus. A “repeated” guess is an incorrect guess that appears more than once in the set of guesses being submitted. In some cases, ˆe or e∗may itself consist of multiple words. In this case, our points and feedback are based on the best match between any word of ˆe and any word of e∗. In alignments where multiple German words translate as a single phrase,3 we take the phrasal translation to be the correct answer e∗for each of the German words. 3.5 Normalization After collecting the data, we normalized the user guesses for further analysis. All guesses were lowercased. Multi-word guesses were crudely replaced by the longest word in the guess (breaking ties in favor of the earliest word). 
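As a compact reconstruction of the effort-point rubric from §3.4 above: exact matches earn a flat 10 points before this rubric applies, and the nonsense and repeat checks below are simplified relative to the description in the text (coverage in the word-vector table stands in for the bitext/frequency check, and all reference words are assumed to have vectors).

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def effort_points(guess, reference, vectors, other_guesses):
    # vectors: word -> GloVe vector; other_guesses: remaining guesses in the same submission
    if not guess:
        return 0                                         # blank guess
    words = guess.lower().split()
    if guess in other_guesses or any(w not in vectors for w in words):
        return -1                                        # repeated or "nonsense" guess
    g = np.mean([vectors[w] for w in words], axis=0)
    r = np.mean([vectors[w] for w in reference.lower().split()], axis=0)
    sim = cosine(g, r)
    if sim < 0:
        return -1
    if sim < 0.4:
        return 0
    return 10 * sim                                      # "close" guess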
The guesses included many spelling errors as well as some nonsense strings and direct copies of the input. We defined the dictionary to be the 100,000 most frequent word types (lowercased) from the WMT English data. If a user’s guess ˆe does not match e∗and is not in the dictionary, we replace it with • the special symbol <COPY>, if ˆe appears to be a copy of the German source word f (meaning that its Levenshtein distance from f is < 0.2 · max(|ˆe|, |f|)); • else, the closest word in the dictionary4 as measured by Levenshtein distance (breaking 3Our German-English alignments are constructed as in Renduchintala et al. (2016). 4Considering only words returned by the Pyenchant ‘suggest’ function (http://pythonhosted.org/pyenchant/). ties alphabetically), provided the dictionary has a word at distance ≤2; • else <BLANK>, as if the user had not guessed. 4 User Model In each submission, the user jointly guesses several English words, given spelling and context clues. One way that a machine could perform this task is via probabilistic inference in a factor graph—and we take this as our model of how the human user solves the problem. The user observes a German sentence f = [f1, f2, . . . , fi, . . . fn]. The translation of each word token fi is Ei, which is from the user’s point of view a random variable. Let Obs denote the set of indices i for which the user also observes that Ei = e∗ i , the aligned reference translation, because e∗ i has already been guessed correctly (green feedback) or shown as a clue. Thus, the user’s posterior distribution over E is Pθ(E = e | EObs = e∗ Obs, f, history), where “history” denotes the user’s history of past interactions. We assume that a user’s submission ˆe is derived from this posterior distribution simply as a random sample. We try to fit the parameter vector θ to maximize the log-probability of the submission. Note that our model is trained on the user guesses ˆe, not the reference translations e∗. That is, we seek parameters θ that would explain why all users made their guesses. Although we fit a single θ, this does not mean that we treat users as interchangeable (since θ can include user-specific parameters) or unvarying (since our model conditions users’ behavior on their history, which can capture some learning). 4.1 Factor Graph We model the posterior distribution as a conditional random field (Figure 3) in which the value of Ei depends on the form of fi as well as on the meanings ej (which may be either observed or jointly guessed) of the context words at j ̸= i: Pθ(E = e | EObs = e∗ Obs, f, history) (1) ∝ Y i/∈Obs (ψef(ei, fi) · Y j̸=i ψee(ei, ej, i −j)) We will define the factors ψ (the potential functions) in such a way that they do not “know German” but only have access to information that is available to an naive English speaker. In brief, the 1862 f1 . . . fi . . . fn E1 . . . Ei . . . En ψee(e1, ei) ψee(ei, en) ψee(e1, en) ψef(e1, f1) ψef(ei, fi) ψef(en, fn) Figure 3: Model for user understanding of L2 words in sentential context. This figure shows an inference problem in which all the observed words in the sentence are in German (that is, Obs = ∅). As the user observes translations via clues or correctly-marked guesses, some of the Ei become shaded. factor ψef(ei, fi) considers whether the hypothesized English word ei “looks like” the observed German word fi, and whether the user has previously observed during data collection that ei is a correct or incorrect translation of fi. 
Meanwhile, the factor ψee(ei, ej) considers whether ei is commonly seen in the context of ej in English text. For example, the user will elevate the probability that Ei = cake if they are fairly certain that Ej is a related word like eat or chocolate. The potential functions ψ are parameterized by θ, a vector of feature weights. For convenience, we define the features in such a way that we expect their weights to be positive. We rely on just 6 features at present (see section 6 for future work), although each is complex and real-valued. Thus, the weights θ control the relative influence of these 6 different types of information on a user’s guess. Our features broadly fall under the following categories: Cognate, History, and Context. We precomputed cognate and context features, while history features are computed on-the-fly for each training instance. All features are case-insensitive. 4.1.1 Cognate and History Features For each German token fi, the ψef factor can score each possible guess ei of its translation: ψef(ei, fi) = exp(θef · φef(ei, fi)) (2) The feature function φef returns a vector of 4 real numbers: • Orthographic Similarity: The normalized Levenshtein distance between the 2 strings. φef orth(ei, fi) = 1 − lev(ei, fi) max(|ei|, |fi|) (3) The weight on this feature encodes how much users pay attention to spelling. • Pronunciation Similarity: This feature is similar to the previous one, except that it calculates the normalized distance between the pronunciations of the two words: φef pron(ei, fi) = φef orth(prn(ei), prn(fi)) (4) where the function prn(x) maps a string x to its pronunciation. We obtained pronunciations for all words in the English and German vocabularies using the CMU pronunciation dictionary tool (Weide, 1998). Note that we use English pronunciation rules even for German words. This is because we are modeling a naive learner who may, in the absence of intuition about German pronunciation rules, apply English pronunciation rules to German. • Positive History Feature: If a user has been rewarded in a previous HIT for guessing ei as a translation of fi, then they should be more likely to guess it again. We define φef hist+(ei, fi) to be 1 in this case and 0 otherwise. The weight on this feature encodes whether users learn from positive feedback. • Negative History Feature: If a user has already incorrectly guessed ei as a translation of fi in a previous submission during this HIT, then they should be less likely to guess it again. We define φef hist-(ei, fi) to be −1 in this case and 0 otherwise. The weight on this feature encodes whether users remember negative feedback.5 4.1.2 Context Features In the same way, the ψef factor can score the compatibility of a guess ei with a context word ej, which may itself be a guess, or may be observed: ψee ij (ei, ej) = exp(θee · φee(ei, ej, i −j)) (5) φee returns a vector of 2 real numbers: φee pmi(ei, ej) = ( PMI(ei, ej) if |i −j| > 1 0 otherwise (6) φee pmi1(ei, ej) = ( PMI1(ei, ej) if |i −j| = 1 0 otherwise (7) 5At least in short-term memory—this feature currently omits to consider any negative feedback from previous HITs. 1863 where the pointwise mutual information PMI(x, y) measures the degree to which the English words x, y tend to occur in the same English sentence, and PMI1(x, y) measures how often they tend to occur in adjacent positions. These measurements are estimated from the English side of the WMT corpus, with smoothing performed as in Knowles et al. (2016). 
For example, if fi = Suppe, the user’s guess of Ei should be influenced by fj = Brot appearing in the same sentence, if the user suspects or observes that its translation is Ej = bread. The PMI feature knows that soup and bread tend to appear in the same English sentences, whereas PMI1 knows that they tend not to appear in the bigram soup bread or bread soup. 4.1.3 User-Specific Features Apart from the basic 6-feature model, we also trained a version that includes user-specific copies of each feature (similar to the domain adaptation technique of Daum´e III (2007)). For example, φef orth,32(ei, fi) is defined to equal φef orth(ei, fi) for submissions by user 32, and defined to be 0 for submissions by other users. Thus, with 79 users in our dataset, we learned 6 × 80 feature weights: a local weight vector for each user and a global vector of “backoff” weights. The global weight θef orth is large if users in general reward orthographic similarity, while θef orth,32 (which may be positive or negative) captures the degree to which user 32 rewards it more or less than is typical. The user-specific features are intended to capture individual differences in incidental comprehension. 4.2 Inference According to our model, the probability that the user guesses Ei = ˆei is given by a marginal probability from the CRF. Computing these marginals is a combinatorial optimization problem that involves reasoning jointly about the possible values of each Ei (i /∈Obs), which range over the English vocabulary V e. We employ loopy belief propagation (Murphy et al., 1999) to obtain approximate marginals over the variables E. A tree-based schedule for message passing was used (Dreyer and Eisner, 2009, footnote 22). We run 3 iterations with a new random root for each iteration. We define the vocabulary V e to consist of all reference translations e∗ i and normalized user guesses ˆei from our entire dataset (see section 3.5), about 5K types altogether including <BLANK> and <COPY>. We define the cognate features to treat <BLANK> as the empty string and to treat <COPY> as fi. We define the PMI of these special symbols with any e to be the mean PMI with e of all dictionary words, so that they are essentially uninformative. 4.3 Parameter Estimation We learn our parameter vector θ to approximately maximize the regularized log-likelihood of the users’ guesses: X log Pθ(E = ˆe | EObs = e∗ Obs, f, history)  −λ||θ||2 (8) where the summation is over all submissions in our dataset. The gradient of each summand reduces to a difference between observed and expected values of the feature vector φ = (φef, φee), summed over all factors in (1). The observed features are computed directly by setting E = ˆe. The expected features (which arise from the log of the normalization constant of (1)) are computed approximately by loopy belief propagation. We trained θ using stochastic gradient descent (SGD),6 with a learning rate of 0.1 and regularization parameter of 0.2. The regularization parameter was tuned on our development set. 5 Experimental Results We divided our data randomly into 5550 training instances, 1903 development instances, and 1939 test instances. Each instance was a single submission from one user, consisting of a batch of “simultaneous” guesses on a macaronic sentence. We noted qualitatively that when a large number of English words have been revealed, particularly content words, the users tend to make better guesses. 
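To make §4.1.1 and §4.1.3 concrete, the sketch below computes the cognate and history features for one (guess, German word) pair and then adds the user-specific copies used for adaptation. The dict-of-features representation and the prn() pronunciation lookup (in practice, the CMU pronunciation dictionary) are illustrative assumptions.

def levenshtein(a, b):
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def norm_sim(a, b):
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def phi_ef(e, f, user_id, prn, hist_correct, hist_wrong):
    feats = {
        'orth': norm_sim(e, f),                       # orthographic similarity
        'pron': norm_sim(prn(e), prn(f)),             # pronunciation similarity
        'hist+': 1.0 if (e, f) in hist_correct else 0.0,
        'hist-': -1.0 if (e, f) in hist_wrong else 0.0,
    }
    # user adaptation: a per-user copy of every feature, added alongside the shared ones
    feats.update({(name, user_id): value for name, value in feats.items()})
    return feats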
Conversely, when most context is German, we unsuprisingly see the user leave many guesses blank and make other guesses based on string similarity triggers. Such submissions are difficult to predict as different users will come up with a wide variety of guesses; our model therefore resorts to predicting similar-sounding words. For detailed examples of this see Appendix A. 6To speed up training, SGD was parallelized using Recht et al.’s (2011) Hogwild! algorithm. We trained for 8 epochs. 1864 Model Recall at k (dev) Recall at k (test) 1 25 50 1 25 50 Basic 15.24 34.26 38.08 16.14 35.56 40.30 User-Adapted 15.33 34.40 38.67 16.45 35.71 40.57 Table 1: Percentage of foreign words for which the user’s actual guess appears in our top-k list of predictions, for models with and without user-specific features (k ∈{1, 25, 50}). For each foreign word fi in a submission with i /∈Obs, our inference method (section 4.2) predicts a marginal probability distribution over a user’s guesses ˆei. Table 1 shows our ability to predict user guesses.7 Recall that this task is essentially a structured prediction task that does joint 4919-way classification of each German word. Roughly 1/3 of the time, our model’s top 25 words include the user’s exact guess. However, the recall reported in Table 1 is too stringent for our educational application. We could give the model partial credit for predicting a synonym of the learner’s guess ˆe. More precisely, we would like to give the model partial credit for predicting when the learner will make a poor guess of the truth e∗—even if the model does not predict the user’s specific incorrect guess ˆe. To get at this question, we use English word embeddings (as in section 3.4) as a proxy for the semantics and morphology of the words. We measure the actual quality of the learner’s guess ˆe as its cosine similarity to the truth, sim(ˆe, e∗). While quality of 1 is an exact match, and quality scores > 0.75 are consistently good matches, we found quality of ≈0.6 also reasonable. Pairs such as (mosque, islamic) and (politics, government) are examples from the collected data with quality ≈0.6. As quality becomes < 0.4, however, the relationship becomes tenuous, e.g., (refugee, soil). Similarly, we measure the predicted quality as sim(e, e∗), where e is the model’s 1-best prediction of the user’s guess. Figure 4 plots predicted vs. actual quality (each point represents one of the learner’s guesses on development data), obtaining a correlation of 0.38, which we call the “quality correlation” or QC. A clear diagonal band can be seen, corresponding to the instances where 7Throughout this section, we ignore the 5.2% of tokens on which the user did not guess (i.e., the guess was <BLANK> after the normalization of section 3.5). Our present model simply treats <BLANK> as an ordinary and very bland word (section 4.2), rather than truly attempting to predict when the user will not guess. Indeed, the model’s posterior probability of <BLANK> in these cases is a paltry 0.0000267 on average (versus 0.0000106 when the user does guess). See section 6. Figure 4: Actual quality sim(ˆe, e∗) of the learner’s guess ˆe on development data, versus predicted quality sim(e, e∗) where e is the basic model’s 1-best prediction. Figure 5: Actual quality sim(ˆe, e∗) of the learner’s guess ˆe on development data, versus the expectation of the predicted quality sim(e, e∗) where e is distributed according to the basic model’s posterior. the model exactly predicts the user’s guess. 
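The quality-correlation (QC) measure discussed above can be reproduced along the following lines; the choice of Pearson correlation and the posterior-as-dict representation are assumptions made for this sketch.

import numpy as np

def quality_correlation(instances, embed, expected=True):
    # instances: (posterior, user_guess, reference) triples;
    # posterior maps candidate English words to model probabilities
    def sim(a, b):
        va, vb = embed(a), embed(b)
        return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    actual, predicted = [], []
    for posterior, guess, ref in instances:
        actual.append(sim(guess, ref))                       # quality of the user's guess
        if expected:
            predicted.append(sum(p * sim(e, ref) for e, p in posterior.items()))
        else:
            predicted.append(sim(max(posterior, key=posterior.get), ref))
    return float(np.corrcoef(actual, predicted)[0, 1])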
The cloud around the diagonal is formed by instances where the model’s prediction was not identical to the user’s guess but had similar quality. We also consider the expected predicted quality, averaging over the model’s predictions e of ˆe (for all e ∈V e) in proportion to the probabilities that it assigns them. This allows the model to more smoothly assess whether the learner is likely to make a high-quality guess. Figure 5 shows this version, where the points tend to shift upward and the quality correlation (QC) rises to 0.53. All QC values are given in Table 2. We used expected QC on the development set as the criterion for selecting the regularization coefficient λ and as the early stopping criterion during training. 1865 Model Dev Test Exp 1-Best Exp 1-Best Basic 0.525 0.379 0.543 0.411 User-Adapted 0.527 0.427 0.544 0.439 Table 2: Quality correlations: basic and user-adapted models. Feature Removed QC Expected 1-Best None 0.522 0.425 Cognate 0.516 0.366∗ Context 0.510 0.366∗ History 0.499∗ 0.259∗ Table 3: Impact on quality correlation (QC) of removing features from the model. Ablated QC values marked with asterisk∗differ significantly from the full-model QC values in the first row (p < 0.05, using the test of Preacher (2002)). 5.1 Feature Ablation To test the usefulness of different features, we trained our model with various feature categories disabled. To speed up experimentation, we sampled 1000 instances from the training set and trained our model on those. The resulting QC values on dev data are shown in Table 3. We see that removing history-based features has the most significant impact on model performance: both QC measures drop relative to the full model. For cognate and context features, we see no significant impact on the expected QC, but a significant drop in the 1-best QC, especially for context features. 5.2 Analysis of User Adaptation Table 2 shows that the user-specific features significantly improve the 1-best QC of our model, although the much smaller improvement in expected QC is insignificant. User adaptation allows us to discern different styles of incidental comprehension. A useradapted model makes fine-grained predictions that could help to construct better macaronic sentences for a given user. Each user who completed at least 10 HITs has their user-specific weight vector shown as a row in Figure 6. Recall that the user-specific weights are not used in isolation, but are added to backoff weights shared by all users. These user-specific weight vectors cluster into four groups. Furthermore, the average points per HIT differ by cluster (significantly between each cluster pair), reflecting the success of different strategies.8 Users in group (a) employ a generalist 8Recall that in our data collection process, we award points for each HIT (section 3.4). While the points were designed more as a reward than as an evaluation of learner success, a higher score does reflect more guesses that were corFigure 6: The user-specific weight vectors, clustered into groups. Average points per HIT for the HITs completed by each group: (a) 45, (b) 48, (c) 50 and (d) 42. strategy for incidental comprehension. They pay typical or greater-than-typical attention to all features of the current HIT, but many of them have diminished memory for vocabulary learned during past HITs (the hist+ feature). Users in group (b) seem to use the opposite strategy, deriving their success from retaining common vocabulary across HITs (hist+) and falling back on orthography for new words. 
Group (c) users, who earned the most points per HIT, appear to make heavy use of context and pronunciation features together with hist+. We also see that pronunciation similarity seems to be a stronger feature for group (c) users, in contrast to the more superficial orthographic similarity. Group (d), which earned the fewest points per HIT, appears to be an “extreme” version of group (b): these users pay unusually little attention to any model features other than orthographic similarity and hist+. (More precisely, the model finds group (d)’s guesses harder to predict on the basis of the available features, and so gives a more uniform distribution over V e.) 6 Future Improvements to the Model Our model’s feature set (section 4.1) could clearly be refined and extended. Indeed, in a separate paper (Knowles et al., 2016), we use a more tightly controlled experimental design to explore some simple feature variants. A cheap way to vet features would be to test whether they help on the task of modeling reference translations, which are rect or close, while a lower score indicates that some words were never guessed before the system revealed them as clues. 1866 more plentiful and less noisy than the user guesses. For Cognate features, there exist many other good string similarity metrics (including trainable ones). We could also include φef features that consider whether ei’s part of speech, frequency, and length are plausible given fi’s burstiness, observed frequency, and length. (E.g., only short common words are plausibly translated as determiners.) For Context features, we could design versions that are more sensitive to the position and status of the context word j. We speculate that the actual influence of ej on a user’s guess ei is stronger when ej is observed rather than itself guessed; when there are fewer intervening tokens (and particularly fewer observed ones); and when j < i. Orthogonally, φef(ei, ej) could go beyond PMI and windowed PMI to also consider cosine similarity, as well as variants of these metrics that are thresholded or nonlinearly transformed. Finally, we do not have to treat the context positions j as independent multiplicative influences as in equation (1) (cf. Naive Bayes): we could instead use a topic model or some form of language model to determine a conditional probability distribution over Ei given all other words in the context. An obvious gap in our current feature set is that we have no φe features to capture that some words ei ∈V e are more likely guesses a priori. By defining several versions of this feature, based on frequencies in corpora of different reading levels, we could learn user-specific weights modeling which users are unlikely to think of an obscure word. We should also include features that fire specifically on the reference translation e∗ i and the special symbols <BLANK> and <COPY>, as each is much more likely than the other features would suggest. For History features, we could consider negative feedback from other HITs (not just the current HIT), as well as positive information provided by revealed clues (not just confirmed guesses). We could also devise non-binary versions in which more recent or more frequent feedback on a word has a stronger effect. More ambitiously, we could model generalization: after being shown that Kind means child, a learner might increase the probability that the similar word Kinder means child or something related (children, childish, . . . 
), whether because of superficial orthographic similarity or a deeper understanding of the morphology. Similarly, a learner might gradually acquire a model of typical spelling changes in English-German cognate pairs. A more significant extension would be to model a user’s learning process. Instead of representing each user by a small vector of user-specific weights, we could recognize that the user’s guessing strategy and knowledge can change over time. A serious deficiency in our current model (not to mention our evaluation metrics!) is that we treat <BLANK> like any other word. A more attractive approach would be to learn a stochastic link from the posterior distribution to the user’s guess or non-guess, instead of assuming that the user simply samples the guess from the posterior. As a simple example, we might say the user guesses e ∈V e with probability p(e)β—where p(e) is the posterior probability and β > 1 is a learned parameter—with the remaining probability assigned to <BLANK>. This says that the user tends to avoid guessing except when there are relatively high-probability words to guess. 7 Conclusion We have presented a methodology for collecting data and training a model to estimate a foreign language learner’s understanding of L2 vocabulary in partially understood contexts. Both are novel contributions to the study of L2 acquisition. Our current model is arguably crude, with only 6 features, yet it can already often do a reasonable job of predicting what a user might guess and whether the user’s guess will be roughly correct. This opens the door to a number of future directions with applications to language acquisition using personalized content and learners’ knowledge. We plan a deeper investigation into how learners detect and combine cues for incidental comprehension. We also leave as future work the integration of this model into an adaptive system that tracks learner understanding and creates scaffolded content that falls in their zone of proximal development, keeping them engaged while stretching their understanding. Acknowledgments This work was supported by a seed grant from the Science of Learning Institute at Johns Hopkins University, and also by a National Science Foundation Graduate Research Fellowship (Grant No. DGE-1232825) to the second author. We thank Chadia Abras, Adam Teichert, and Sanjeev Khudanpur for helpful discussions and suggestions. 1867 References Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, 2015. Jill Burstein, Joel Tetreault, and Nitin Madnani. The e-rater automated essay scoring system. In Mark D. Shermis, editor, Handbook of Automated Essay Evaluation: Current Applications and New Directions, pages 55–67. Routledge, 2013. Hal Daum´e III. Frustratingly easy domain adaptation. In Proceedings of ACL, pages 256–263, June 2007. Markus Dreyer and Jason Eisner. Graphical models over multiple strings. In Proceedings of EMNLP, pages 101–110, Singapore, August 2009. Michael Heilman and Nitin Madnani. ETS: Discriminative edit models for paraphrase scoring. In Joint Proceedings of *SEM and SemEval, pages 529–535, June 2012. Rebecca Knowles, Adithya Renduchintala, Philipp Koehn, and Jason Eisner. 
Analyzing learner understanding of novel L2 vocabulary. 2016. To appear. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL: Short Papers, pages 177–180, 2007. Stephen Krashen. We acquire vocabulary and spelling by reading: Additional evidence for the input hypothesis. The Modern Language Journal, 73(4):440–464, 1989. Igor Labutov and Hod Lipson. Generating codeswitched text for lexical learning. In Proceedings of ACL, pages 562–571, 2014. Diane Litman. Natural language processing for enhancing teaching and learning. In Proceedings of AAAI, 2016. Nitin Madnani, Michael Heilman, Joel Tetreault, and Martin Chodorow. Identifying high-level organizational elements in argumentative discourse. In Proceedings of NAACL-HLT, pages 20–28, 2012. Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of UAI, pages 467–475, 1999. G¨ozde ¨Ozbal, Daniele Pighin, and Carlo Strapparava. Automation and evaluation of the keyword method for second language learning. In Proceedings of ACL (Volume 2: Short Papers), pages 352–357, 2014. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GLoVe: Global vectors for word representation. In Proceedings of EMNLP, volume 14, pages 1532–1543, 2014. K. J. Preacher. Calculation for the test of the difference between two independent correlation coefficients [computer software], May 2002. Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. Adithya Renduchintala, Rebecca Knowles, Philipp Koehn, and Jason Eisner. Creating interactive macaronic interfaces for language learning. In Proceedings of ACL (System Demonstrations), 2016. Lev Vygotski˘ı. Thought and Language (Revised and Expanded Edition). MIT Press, 2012. R. Weide. The CMU pronunciation dictionary, release 0.6, 1998. David Wood, Jerome S. Bruner, and Gail Ross. The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2): 89–100, 1976. 1868 Appendices A Example of Learner Guesses vs. Model Predictions To give a sense of the problem difficulty, we have hand-picked and presented two training examples (submissions) along with the predictions of our basic model and their log-probabilities. In Figure 7a a large portion of the sentence has been revealed to the user in English (blue text) only 2 words are in German. The text in bold font is the user’s guess. Our model expected both words to be guessed; the predictions are listed below the German words Verschiedene and Regierungen. The reference translation for the 2 words are Various and governments. In Figure 7b we see a much harder context where only one word is shown in English and this word is not particularly helpful as a contextual anchor. (a) (b) Figure 7: Two examples of the system’s predictions of what the user will guess on a single submission, contrasted with the user’s actual guess. (The user’s previous submissions on the same task instance are not shown.) In 7a, the model correctly expects that the substantial context will inform the user’s guess. 
In 7b, the model predicts that the user will fall back on string similarity—although we can see that the user’s actual guess of and day was likely informed by their guess of night, an influence that our CRF did consider. The numbers shown are log-probabilities. Both examples show the sentences in a macaronic state (after some reordering or translation has occurred). For example, the original text of the German sentence in 7b reads Deshalb durften die Paare nur noch ein Kind bekommen . The macaronic version has undergone some reordering, and has also erroneously dropped the verb due to an incorrect alignment. 1869
2016
175
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1870–1881, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics On the Similarities Between Native, Non-native and Translated Texts Ella Rabinovich△⋆ Sergiu Nisioi⋄ Noam Ordan† Shuly Wintner⋆ △IBM Haifa Research Labs ⋆Department of Computer Science, University of Haifa ⋄Solomon Marcus Center for Computational Linguistics, University of Bucharest †The Arab College for Education, Haifa {ellarabi,sergiu.nisioi,noam.ordan}@gmail.com, [email protected] Abstract We present a computational analysis of three language varieties: native, advanced non-native, and translation. Our goal is to investigate the similarities and differences between non-native language productions and translations, contrasting both with native language. Using a collection of computational methods we establish three main results: (1) the three types of texts are easily distinguishable; (2) nonnative language and translations are closer to each other than each of them is to native language; and (3) some of these characteristics depend on the source or native language, while others do not, reflecting, perhaps, unified principles that similarly affect translations and non-native language. 1 Introduction This paper addresses two linguistic phenomena: translation and non-native language. Our main goal is to investigate the similarities and differences between these two phenomena, and contrast them with native language. In particular, we are interested in the reasons for the differences between translations and originals, on one hand, and native and non-native language, on the other. Do they reflect “universal” principles, or are they dependent on the source/native language? Much research in translation studies indicates that translated texts have unique characteristics. Translated texts (in any language) constitute a sublanguage of the target language, sometimes referred to as translationese (Gellerstam, 1986). The unique characteristics of translationese have been traditionally classified into two categories: properties that stem from interference of the source language (Toury, 1979), and universal traits resulting from the translation process itself, independently of the specific source and target languages (Baker, 1993; Toury, 1995). The latter so-called translation universals have triggered a continuous debate among translation studies researchers (Mauranen and Kujam¨aki, 2004; House, 2008; Becher, 2010). Similarly, over half a century of research on second language acquisition (SLA) established the presence of cross-linguistic influences (CLI) in non-native utterances (Jarvis and Pavlenko, 2008). CLI is a cover term proposed by Kellerman and Sharwood-Smith (1986) to denote various phenomena that stem from language contact situations such as transfer, interference, avoidance, borrowing, etc.1 In addition, universal traits resulting from the learning process itself have been noticed regardless of the native language, L1.2 For example, similar developmental sequences have been observed for negation, question formation, and other sentence structures in English (Dulay and Burt, 1974; Odlin, 1989) for both Chinese and Spanish natives. Phenomena such as overgeneralization, strategies of learning (Selinker, 1972), psychological factors (Ellis, 1985), and cultural distance (Giles and Byrne, 1982) are also influential in the acquisition process. 
There are clear similarities between translations and non-native language: both are affected by the simultaneous presence of (at least) two linguistic systems, which may result in a higher cognitive load (Shlesinger, 2003). The presence of the L1 may also cause similar CLI effects on the target language. On the other hand, there are reasons to believe 1To avoid terminological conflicts, we shall henceforth use CLI to denote any influence of one linguistic system over another, w.r.t. both translations and non-native productions. 2For simplicity, we will use L1 to refer both to the native language of a speaker and to the source language of a translated text. We use target language to refer to second and translation languages (English in this paper). 1870 that translationese and non-native language should differ from each other. Translations are produced by native speakers of the target language. Nonnatives, in contrast, arguably never attain nativelike abilities (Coppieters, 1987; Johnson and Newport, 1991), however this hypothesis is strongly debated in the SLA community (Birdsong, 1992; Lardiere, 2006). Our goal in this work is to investigate three language varieties: the language of native speakers (N), the language of advanced, highly fluent nonnative speakers (NN), and translationese (T). We use the term constrained language to refer to the latter two varieties. We propose a unified computational umbrella for exploring two related areas of research on bilingualism: translation studies and second language acquisition. Specifically, we put forward three main hypotheses: (1) The three language varieties have unique characteristics that make them easily distinguishable. (2) Non-native language and translations are closer to each other than either of them is to native language. (3) Some of these characteristics are dependent on the specific L1, but many are not, and may reflect unified principles that similarly affect translations and non-native language. We test these hypotheses using several corpusbased computational methods. We use supervised and unsupervised classification (Section 4) to show that the three language varieties are easily distinguishable. In particular, we show that native and advanced non-native productions can be accurately separated. More pertinently, we demonstrate that non-native utterances and translations comprise two distinct linguistic systems. In Section 5, we use statistical analysis to explore the unique properties of each language variety. We show that the two varieties of constrained language are much closer to each other than they are to native language: they exhibit poorer lexical richness, a tendency to use more frequent words, a different distribution of idiomatic expressions and pronouns, and excessive use of cohesive devices. This is an unexpected finding, given that both natives and translators (in contrast to non-natives) produce texts in their mother tongue. Finally, in Section 6 we use language modeling to show that translations and non-native language exhibit similar statistical properties that clearly reflect cross-linguistic influences: experiments with distinct language families reveal salient ties between the two varieties of constrained language. The main contribution of this work is thus theoretical: it sheds light on some fundamental questions regarding bilingualism, and we expect it to motivate and drive future research in both SLA and translation studies. 
Moreover, a better understanding of constrained language may also have some practical import, as we briefly mention in the following section. 2 Related work Corpus-based investigation of translationese has been a prolific field of recent research, laying out an empirical foundation for the theoretically motivated hypotheses on the characteristics of translationese. More specifically, identification of translated texts by means of automatic classification shed light on the manifestation of translation universals and cross-linguistic influences as markers of translated texts (Baroni and Bernardini, 2006; van Halteren, 2008; Gaspari and Bernardini, 2008; Kurokawa et al., 2009; Koppel and Ordan, 2011; Ilisei and Inkpen, 2011; Volansky et al., 2015; Rabinovich and Wintner, 2015; Nisioi, 2015b), while Gaspari and Bernardini (2008) introduced a dataset for investigation of potential common traits between translations and non-native texts. Such studies prove to be important for the development of parallel corpora (Resnik and Smith, 2003), the improvement in quality of plagiarism detection (Potthast et al., 2011), language modeling, and statistical machine translation (Lembersky et al., 2012, 2013). Computational approaches also proved beneficial for theoretical research in second language acquisition (Jarvis and Pavlenko, 2008). Numerous studies address linguistic processes attributed to SLA, including automatic detection of highly competent non-native writers (Tomokiyo and Jones, 2001; Bergsma et al., 2012), identification of the mother tongue of English learners (Koppel et al., 2005; Tetreault et al., 2013; Tsvetkov et al., 2013; Nisioi, 2015a) and typology-driven error prediction in learners’ speech (Berzak et al., 2015). These studies are instrumental for language teaching and student evaluation (Smith and Swan, 2001), and can improve NLP applications such as authorship profiling (Estival et al., 2007) or grammatical error correction (Chodorow et al., 2010). Most of these studies utilize techniques that are motivated by the same abstract principles associ1871 ated with L1 influences on the target language. To the best of our knowledge, our work is the first to address both translations and non-native language under a unifying computational framework, and in particular to compare both with native language. 3 Methodology and experimental setup 3.1 Dataset Our dataset3 is based on the highly homogeneous corpus of the European Parliament Proceedings (Koehn, 2005). Note that the proceedings are produced as follows: (1) the utterances of the speakers are transcribed; (2) the transcriptions are sent to the speaker who may suggest minimal editing without changing the content; (3) the edited version is then translated by native speakers. Note in particular that the texts are not a product of simultaneous interpretation. In this work we utilize a subset of Europarl in which each sentence is manually annotated with speaker information, including the EU state represented and the original language in which the sentence was uttered (Nisioi et al., 2016). The texts in the corpus are uniform in terms of style, respecting the European Parliament’s formal standards. Translations are produced by native English speakers and all non-native utterances are selected from members not representing UK or Ireland. Europarl N consists of texts delivered by native speakers from England. 
Table 1 depicts statistics of the dataset.4 In contrast to other learner corpora such as ICLE (Granger, 2003), EFCAMDAT (Geertzen et al., 2013) or TOEFL-11 (Blanchard et al., 2013), this corpus contains translations, native, and nonnative English of high proficiency speakers. Members of the European Parliament have the right to use any of the EU’s 24 official languages when speaking in Parliament, and the fact that some of them prefer to use English suggests a high degree of confidence in their language skills. 3.2 Preprocessing All datasets were split by sentence, cleaned (text lowercased, punctuation and empty lines removed) and tokenized using the Stanford tools 3The dataset is available at http://nlp.unibuc. ro/resources.html 4Appendix A provides details on the distribution of NN and T texts by various L1s. sub-corpus sentences tokens types native (N) 60,182 1,589,215 28,004 non-native (NN) 29,734 783,742 18,419 translated (T) 738,597 22,309,296 71,144 total 828,513 24,682,253 117,567 Table 1: Europarl corpus statistics: native, nonnative and translated texts. (Manning et al., 2014). For the classification experiments we randomly shuffled the sentences within each language variety to prevent interference of other artifacts (e.g., authorship, topic) into the classification procedure. We divided the data into chunks of approximately 2,000 tokens, respecting sentence boundaries, and normalized the values of lexical features by the number of tokens in each chunk. For classification we used Platt’s sequential minimal optimization algorithm (Keerthi et al., 2001; Hall et al., 2009) to train support vector machine classifiers with the default linear kernel. In all the experiments we used (the maximal) equal amount of data from each category, thus we always randomly down-sampled the datasets in order to have a comparable number of examples in each class; specifically, 354 chunks were used for each language variety: N, NN and T. 3.3 Features The first feature set we utilized for the classification tasks comprises function words (FW), probably the most popular choice ever since Mosteller and Wallace (1963) used it successfully for the Federalist Papers. Function words proved to be suitable features for multiple reasons:(1) they abstract away from contents and are therefore less biased by topic; (2) their frequency is so high that by and large they are assumed to be selected unconsciously by authors; (3) although not easily interpretable, they are assumed to reflect grammar, and therefore facilitate the study of how structures are carried over from one language to another. We used the list of approximately 400 function words provided in Koppel and Ordan (2011). A more informative way to capture (admittedly shallow) syntax is to use part-of-speech (POS) trigrams. Triplets such as PP (personal pronoun) + VHZ (have, 3sg present) + VBN (be, past participle) reflect a complex tense form, represented distinctively across languages. In Europarl, for example, this triplet is highly frequent in translations 1872 from Finnish and Danish and much rarer in translations from Portuguese and Greek. In this work we used the top-3,000 most frequent POS trigrams in each corpus. We also used positional token frequency (Grieve, 2007). The feature is defined as counts of words occupying the first, second, third, penultimate and last positions in a sentence. 
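As a concrete illustration of the setup in Sections 3.2 and 3.3, the sketch below chunks a tokenized corpus, extracts normalized function-word frequencies, and runs ten-fold cross-validation with a linear SVM. It is only a rough analogue of the original pipeline: the paper uses Weka's SMO implementation and the roughly 400-item function-word list of Koppel and Ordan (2011), whereas here scikit-learn's LinearSVC stands in for SMO and FUNCTION_WORDS is a truncated placeholder list.

```python
# Rough analogue of the chunking, function-word features and ten-fold SVM
# classification described above (Sections 3.2-3.3). LinearSVC stands in for
# Weka's SMO; FUNCTION_WORDS is a truncated placeholder for the full list.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for", "with", "not"]

def chunk_sentences(sentences, chunk_size=2000):
    """Group tokenized sentences into chunks of ~chunk_size tokens,
    respecting sentence boundaries."""
    chunks, current, n_tokens = [], [], 0
    for sent in sentences:
        current.append(sent)
        n_tokens += len(sent)
        if n_tokens >= chunk_size:
            chunks.append([tok for s in current for tok in s])
            current, n_tokens = [], 0
    return chunks

def fw_features(chunk):
    """Function-word counts normalized by the number of tokens in the chunk."""
    return np.array([chunk.count(w) for w in FUNCTION_WORDS], dtype=float) / len(chunk)

def build_dataset(sentences_by_class, n_chunks_per_class=354):
    """sentences_by_class maps 'N', 'NN', 'T' to lists of tokenized sentences."""
    X, y = [], []
    for label, sentences in sentences_by_class.items():
        chunks = chunk_sentences(sentences)[:n_chunks_per_class]  # down-sample
        X.extend(fw_features(c) for c in chunks)
        y.extend([label] * len(chunks))
    return np.array(X), np.array(y)

# X, y = build_dataset(sentences_by_class)
# print(cross_val_score(LinearSVC(), X, y, cv=10).mean())   # ten-fold accuracy
```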
The motivation behind this feature is that sentences open and close differently across languages, and it should be expected that these opening and closing devices will be transferred from L1 if they do not violate the grammaticality of the target language. Positional tokens were previously used for translationese identification (Volansky et al., 2015) and for native language detection (Nisioi, 2015a). Translations are assumed to exhibit explicitation: the tendency to render implicit utterances in the source text more explicit in the translation product. For example, causality, even though not always explicitly expressed in the source, is expressed in the target by the introduction of cohesive markers such as because, due to, etc. (BlumKulka, 1986). Similarly, Hinkel (2001) conducted a comparative analysis of explicit cohesive devices in academic texts by non-native English students, and found that cohesive markers are distributed differently in non-native English productions, compared to their native counterparts. To study this phenomenon, we used the set of over 100 cohesive markers introduced in Hinkel (2001). 4 The status of constrained language To establish the unique nature of each language variety in our dataset, we perform multiple pairwise binary classifications between N, NN, and T, as well as three-way classifications. Table 2 reports the results; the figures reflect average tenfold cross-validation accuracy (the best result in each column is boldfaced). In line with previous works (see Section 2), classification of N–T, as well as N–NN, yields excellent results with most features and feature combinations. NN–T appears to be easily distinguishable as well; specifically, FW+POS-trigrams combination with/without positional tokens yields 99.57% accuracy. The word maybe is among the most discriminative feature for NN vs. T, being overused in NN, as opposed to perhaps, which exhibits a much higher frequency in T; this may indicate a certain degree of formality, typical of translated texts (Olohan, 2003). The words or, which and too are considerably more frequent in T, implying higher sentence complexity. This trait is also reflected by shorter NN sentences, compared to T: the average sentence length in Europarl is 26 tokens for NN vs. 30 for T. Certain decisiveness devices (sure, very) are underused in T, in accordance with Toury (1995)’s law of standardization (Vanderauwera, 1985). The three-way classification yields excellent results as well; the highest accuracy is obtained using FW+positional tokens with/without POS-trigrams. feature / dataset N-NN N-T NN-T 3-way FW 98.72 98.72 96.89 96.60 POS (trigrams) 97.45 98.02 97.45 95.10 pos. tok 99.01 99.01 98.30 98.11 cohesive markers 85.59 87.14 82.06 74.19 FW+POS 99.43 99.57 99.57 99.34 FW+pos. tok 99.71 99.85 98.30 99.52 POS+pos. tok 99.57 99.57 99.01 99.15 FW+POS+pos. tok 99.85 99.85 99.57 99.52 Table 2: Pairwise and three-way classification results of N, NN and T texts. A careful inspection of the results in Table 2 reveals that NN–T classification is a slightly yet systematically harder task than N–T or N–NN; this implies that NN and T texts are more similar to each other than either of them is to N. To emphasize this last point, we analyze the separability of the three language varieties by applying unsupervised classification. We perform bisecting KMeans clustering procedure previously used for unsupervised identification of translationese by Rabinovich and Wintner (2015). 
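Bisecting K-means repeatedly splits the largest cluster with a standard 2-means step until the desired number of clusters is reached. The sketch below is our own minimal re-implementation for illustration, not the exact procedure of Rabinovich and Wintner (2015), together with the PCA projection used only for the two-dimensional visualization in Figure 1.

```python
# Minimal re-implementation of bisecting K-means over the function-word chunk
# vectors, plus the PCA projection used only for visualization (cf. Figure 1).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def bisecting_kmeans(X, n_clusters=3, seed=0):
    clusters = [np.arange(len(X))]                 # one cluster with all points
    while len(clusters) < n_clusters:
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)                # split the largest cluster
        split = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X[idx])
        clusters.append(idx[split == 0])
        clusters.append(idx[split == 1])
    labels = np.empty(len(X), dtype=int)
    for c, idx in enumerate(clusters):
        labels[idx] = c
    return labels

# X: normalized function-word vectors of all N, NN and T chunks
# labels = bisecting_kmeans(X, n_clusters=3)
# X_2d = PCA(n_components=2).fit_transform(X)     # 2-D plane for plotting
```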
Clustering of N, NN and T using function words into three clusters yields high accuracy, above 90%. For the sake of clusters’ visualization in a bidimensional plane, we applied principal component analysis for dimensionality reduction. Figure 1: Clustering of N, NN and T into three (a) and two (b) clusters using function words. Clusters’ centroids in (a) are marked by black circles; square sign stands for instances clustered wrongly. 1873 The results are depicted in Figure 1 (a). Evidently, NN and T exhibit higher mutual proximity than either of them with N. Fixing the number of expected clusters to 2 further highlights this observation, as demonstrated in Figure 1 (b): both NN and T instances were assigned to a single cluster, distinctively separable from the N cluster. We conclude that the three language varieties (N, NN, and T) constitute three different, distinguishable ontological categories, characterized by various lexical, syntactic and grammatical properties; in particular, the two varieties of constrained language (NN and T) represent two distinct linguistic systems. Nevertheless, we anticipate NN and T to share more common tendencies and regularities, when compared to N. In the following sections, we put this hypothesis to the test. 5 L1-independent similarities In this section we address L1-independent similarities between NN and T, distinguishing them from N. We focus on characteristics which are theoretically motivated by translation studies and which are considered to be L1-independent, i.e., unrelated to cross-linguistic influences. We hypothesize that linguistic devices over- or underrepresented in translation would behave similarly in highly competent non-native productions, compared to native texts. To test this hypothesis, we realized various linguistic phenomena as properties that can be easily computed from N, NN and T texts. We refer to the computed characteristics as metrics. Our hypothesis is that NN metric values will be similar to T, and that both will differ from N. We used equallysized texts of 780K tokens for N, NN and T; the exact computation is specified for each metric. For the sake of visualization, the three values of each metric (for N, NN and T) were zeroone scaled by total-sum normalization. Figure 2 graphically depicts the normalized metric values. We now describe and motivate each metric. We analyze the results in Section 5.1 and establish their statistical significance in Section 5.2. Lexical richness Translated texts tend to exhibit less lexical diversity (Al-Shabab, 1996). BlumKulka (1986) suggested that translated texts make do with less words, which is reflected by their lower type-to-token ratio (TTR) compared to that of native productions. We computed the TTR metric by dividing the number of unique (lemmatized) tokens by the total number of tokens. Mean word rank Halverson (2003) claims that translators use more prototypical language, i.e., they regress to the mean (Shlesinger, 1989). We, therefore, hypothesize that rarer words are used more often in native texts than in non-native productions and translationese. To compute this metric we used a BNC-based ranked list of 50K English words5, excluding the list of function words (see Section 3.3). The metric value was calculated by averaging the rank of all tokens in a text; tokens that do not appear in the list of 50K were excluded. Collocations Collocations are distributed differently in translations and in originals (Toury, 1980; Kenny, 2001). 
Common and frequent collocations are used almost subconsciously by native speakers, but will be subjected to a more careful choice by translators and, presumably, by fluent non-native speakers (Erman et al., 2014). For example, the phrase make sure appears twice more often in native Europarl texts than in NN, and five times more than in T; bear in mind has almost double frequency in N, compared to NN and T. Expressions such as: bring forward, figure out, in light of, food chain and red tape appear dozens of times in N, as opposed to zero occurrences in NN and T Europarl texts. This metric is defined by computing the frequency of idiomatic expressions6 in terms of types. Cohesive markers Translations were proven to employ cohesion intensively (Blum-Kulka, 1986; Øver˚as, 1998; Koppel and Ordan, 2011). Nonnative texts tend to use cohesive markers differently as well: sentence transitions, the major cohesion category, was shown to be overused by nonnative speakers regardless of their native language (Hinkel, 2001). The metric is defined as the frequency of sentence transitions in the three language varieties. Qualitative comparison of various markers between NN and T productions, compared to N in the Europarl texts, highlights this phenomenon: in addition is twice as frequent in NN and T than in N; according, at the same time and thus occur three times more frequently in NN and T, compared to N; moreover is used four times more fre5https://www.kilgarriff.co.uk we used the list extracted from both spoken and written text. 6Idioms were taken from https://en. wiktionary.org/wiki/Category:English_ idioms. The list was minimally cleaned up. 1874 lexical richness∗ mean word rank∗ pronouns∗ collocation (types)∗ transitions 0.3 0.35 0.4 zero-one scale normalized value native (N) translated (T) non-native (NN) Figure 2: Metric values in N, NN and T. Tree-way differences are significant in all metric categories and “*” indicates metrics with higher pairwise similarity of NN and T, compared individually to N. quently; and to conclude is almost six times more frequent. Personal pronouns We expect both non-native speakers and translators to spell out entities (both nouns and proper nouns) more frequently, as a means of explicitation (Olohan, 2002), thus leading to under-use of personal pronouns, in contrast to native texts. As an example, his and she are twice more frequent in N than in NN and T. We define this metric as the frequency of (all) personal and possessive pronouns used in the three language varieties. The over-use of personal pronouns in N utterances, is indeed balanced out by lower frequency of proper and regular nouns in these texts, compared to T and NN.7 5.1 Analysis Evidently (see Figure 2), translationese and nonnative productions exhibit a consistent pattern in both datasets, compared to native texts: NN and T systematically demonstrate lower metric values than N for all characteristics (except sentence transitions, where both NN and T expectedly share a higher value). All metrics except mean word rank exhibit substantial (sometimes dramatic) differences between N, on the one hand, and NN and T, on the other, thus corroborating our hypothesis. Mean word rank exhibits a more moderate variability in the three language varieties, yielding near identical value in NN and T; yet, it shows excessive usage in N. The differences between metric values are statistically significant for all metrics (Section 5.2). 
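For concreteness, the sketch below computes three of the metrics discussed in this section (type-to-token ratio, mean word rank and pronoun frequency) and applies the zero-one total-sum normalization used in Figure 2. The rank list, pronoun inventory and lemmatization are placeholders to be filled in with the resources named above (the BNC-based 50K ranking, the personal and possessive pronoun list, and the Stanford pipeline).

```python
# Sketch of three of the metrics above and of the total-sum normalization used
# for Figure 2. WORD_RANKS and PRONOUNS are placeholders for the BNC-based rank
# list and the pronoun inventory; tokens are assumed lemmatized and lowercased.
def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

def mean_word_rank(tokens, word_ranks, function_words):
    ranks = [word_ranks[t] for t in tokens
             if t in word_ranks and t not in function_words]   # skip OOV and FW
    return sum(ranks) / len(ranks)

def pronoun_frequency(tokens, pronouns):
    return sum(t in pronouns for t in tokens) / len(tokens)

def total_sum_normalize(values):
    """Scale the three per-variety values (N, NN, T) so they sum to one."""
    total = sum(values)
    return [v / total for v in values]

# Example for one metric across the three 780K-token sub-corpora:
# ttr = total_sum_normalize(
#     [type_token_ratio(c) for c in (corpus_N, corpus_NN, corpus_T)])
```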
7Normalized frequencies of nouns and proper nouns are 0.323, 0.331 and 0.345 for N, T, and NN, respectively. Moreover, in all cases (except transitions), the difference between NN and T metrics is significantly lower than the difference between either of them and N, implying a higher proximity of NN and T distributions, compared individually to N. This finding further emphasizes the common tendencies between NN and T. As shown in Figure 2, NN and T are systematically and significantly different from N. Additionally, we can see that T is consistently positioned between N and NN (except for sentence transitions), implying that translations produced by native speakers tend to resemble native utterances to a higher degree than non-native productions. 5.2 Statistical significance Inspired by the results depicted in Figure 2, we now put to test two statistical hypotheses: (1) N, NN and T productions do not represent identical underlying distributions, i.e., at least one pair is distributed differently; and consequently, (2) NN and T productions exhibit higher similarity (in terms of distance) than either of them with N. We test these hypotheses by applying the bootstrapping statistical analysis. Bootstrapping is a statistical technique involving random re-sampling (with replacement) from the original sample; it is often used to assign a measure of accuracy (e.g., a confidence interval) to an estimate. Specifically, let CN, CNN and CT denote native, non-native and translated sub-corpora of equal size (780K tokens). Let CALL denote the concatenation of all three sub-corpora, resulting in a total of 2,340M tokens. We further denote a function computing a metric m by fm; when applied to C, its value is fm(C). The sum of pair1875 wise distances between the three individual dataset metrics is denoted by Dtotal: Dtotal = |fm(CN) −fm(CNN)|+ |fm(CN) −fm(CT)| + |fm(CNN) −fm(CT)| High values of Dtotal indicate a difference between the three language varieties. To examine whether the observed Dtotal is high beyond chance level, we use the bootstrap approach, and repeat the following process 1,000 times:8 we sample CALL with replacement (at sentence granularity), generating in the j-th iteration equal-sized samples c CN j, d CNN j, c CT j. The corresponding distance estimate, therefore, is: [ Dtotal j = |fm(c CN j) −fm( d CNN j)|+ |fm(c CN j) −fm(c CT j)| + |fm( d CNN j) −fm(c CT j)| We repeat random re-sampling and computation of [ Dtotal j 1,000 times, and estimate the p-value of [ Dtotal by calculation of its percentile within the series of (sorted) [ Dtotal j values, where j ∈ (1, . . . , 1000). In all our experiments the original distance Dtotal exceeds the maximum estimate in the series of [ Dtotal j, implying highly significant difference, with p-value<0.001 for all metrics. In order to stress this outcome even further, we now test whether (the constrained) NN and T exhibit higher pairwise similarity, as opposed to N. We achieve this by assessment of the distance between NN and T productions, compared to the distance between N and its closest production (again, in terms of distance): either NN or T. 
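A simplified version of the D_total bootstrap just described is sketched below; f_m is any of the metric functions from Section 5 applied to a list of tokenized sentences, and, for brevity, resampling is done by sentence count rather than by matching the exact token counts of the original sub-corpora.

```python
# Simplified sketch of the bootstrap test for D_total described above. f_m is a
# metric function over a list of sentences; corpus_N, corpus_NN, corpus_T are
# the three sub-corpora. Resampling here is by sentence count, not token count.
import random

def d_total(f_m, c_n, c_nn, c_t):
    return (abs(f_m(c_n) - f_m(c_nn))
            + abs(f_m(c_n) - f_m(c_t))
            + abs(f_m(c_nn) - f_m(c_t)))

def bootstrap_p_value(f_m, corpus_N, corpus_NN, corpus_T, n_iter=1000, seed=0):
    rng = random.Random(seed)
    observed = d_total(f_m, corpus_N, corpus_NN, corpus_T)
    pooled = corpus_N + corpus_NN + corpus_T        # the concatenation C_ALL
    sizes = (len(corpus_N), len(corpus_NN), len(corpus_T))
    estimates = [d_total(f_m, *[rng.choices(pooled, k=s) for s in sizes])
                 for _ in range(n_iter)]
    # p-value: fraction of resampled D_total estimates >= the observed one
    return sum(e >= observed for e in estimates) / n_iter
```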
We sample CN, CNN and CT (with replacement) separately, constructing f CN, g CNN and f CT, respectively, and define the following distance function: g Ddif j =|fm(f CN j)−fm( eCj K)|−|fm( g CNN j)−fm(f CT j)| where K =      NN if |fm(CN) −fm(CNN)| < |fm(CN) −fm(CT)| T otherwise We repeat re-sampling and computation of g Ddif j 1,000 times for each metric value in both 8This sample size is proven sufficient by the highly significant results (very low p-value). datasets and sort the results. The end points of the 95% confidence interval are defined by estimate values with 2.5% deviation from the minimum (min-end-point) and the maximum (maxend-point) estimates. We assess the p-value of the test by inspecting the estimate underlying the minend-point; specifically, in case the min-end-point is greater than 0, we consider p<0.05. Metric categories exhibiting higher NN-T similarity than either N-NN or N-T are marked with “*” in Figure 2. 6 L1-related similarities We hypothesize that both varieties of constrained language exhibit similar (lexical, grammatical, and structural) patterns due to the influence of L1 over the target language. Consequently, we anticipate that non-native productions of speakers of a certain native language (L1) will be closer to translations from L1 than to translations from other languages. Limited by the amount of text available for each individual language, we set out to test this hypothesis by inspection of two language families, Germanic and Romance. Specifically, the Germanic family consists of NN texts delivered by speakers from Austria, Germany, Netherlands and Sweden; and the Romance family includes NN speakers from Portugal, Italy, Spain, France and Romania. The respective T families comprise translations from Germanic and Romance originals, corresponding to the same countries. Table 3 provides details on the datasets. sentences tokens types Germanic NN 5,384 132,880 7,841 Germanic T 269,222 7,145,930 43,931 Romance NN 6,384 180,416 9,838 Romance T 307,296 9,846,215 49,925 Table 3: Europarl Germanic and Romance families: NN and T. We estimate L1-related traces in the two varieties of constrained language by the fitness of a translationese-based language model (LM) to utterances of non-native speakers from the same language family. Attempting to trace structural and grammatical, rather than content similarities, we compile five-gram POS language models from Germanic and Romance translationese (GerT and RomT, respectively).9 We examine the predic9For building LMs we used the closed vocabulary of Penn 1876 tion power of these models on non-native productions of speakers with Germanic and Romance native languages (GerNN and RomNN), hypothesizing that an LM compiled from Germanic translationese will better predict non-native productions of a Germanic speaker and vice versa. The fitness of a language model to a set of sentences is estimated in terms of perplexity (Jelinek et al., 1977). For building and estimating language models we used the KenLM toolkit (Heafield, 2011), employing modified Kneser-Ney smoothing without pruning. Compilation of language-family-specific models was done using 7M tokens of Germanic and Romance translationese each; the test data consisted of 5350 sentences of Germanic and Romance non-native productions. Consequently, for perplexity experiments with individual languages we utilized 500 sentences from each language. We excluded OOVs from all perplexity computations. Table 4 reports the results. 
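The perplexity figures in Table 4 can be computed along the lines of the sketch below, assuming the KenLM Python bindings and two ARPA files trained (order 5, modified Kneser-Ney) on the POS-tagged Germanic and Romance translationese; unlike the paper's setup, this simplified version does not exclude OOV tags from the computation.

```python
# Rough sketch of the perplexity comparison in Table 4, assuming the kenlm
# Python bindings and ARPA models trained on POS-tagged translationese.
# Unlike the paper, OOV tags are not excluded here.
import kenlm

def corpus_perplexity(model, pos_sentences):
    """pos_sentences: list of POS-tag sequences, each a list of tags."""
    log10_prob, n_tokens = 0.0, 0
    for tags in pos_sentences:
        sent = " ".join(tags)
        log10_prob += model.score(sent, bos=True, eos=True)  # log10 p(sentence)
        n_tokens += len(tags) + 1                             # +1 for </s>
    return 10 ** (-log10_prob / n_tokens)

# ger_lm = kenlm.Model("ger_translationese.pos.arpa")
# rom_lm = kenlm.Model("rom_translationese.pos.arpa")
# print(corpus_perplexity(ger_lm, ger_nn_pos),
#       corpus_perplexity(rom_lm, ger_nn_pos))
```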
Prediction of GerNN by the GerT language model yields a slightly lower perplexity (i.e., a better prediction) than prediction by RomT. Similarly, RomNN is much better predicted by RomT than by GerT. These differences are statistically significant: we divided the NN texts into 50 chunks of 100 sentences each, and computed perplexity values by the two LMs for each chunk. Significance was then computed by a two-tailed paired t-test, yielding p-values of 0.015 for GerNN and 6e-22 for RomNN. LM / NN GerNN LM / NN RomNN GerT 8.77 GerT 8.64 RomT 8.79 RomT 8.43 Table 4: Perplexity: fitness of Germanic and Romance translationese LMs to Germanic and Romance NN test sets. As a further corroboration of the above result, we computed the perplexity of the GerT and RomT language models with respect to the language of NN speakers, this time distinguishing speakers by their country of origin. We used the same language models and non-native test chunks of 500 sentences each. Inspired by the outcome of the previous experiment, we expect that NN productions by Germanic speakers will be better predicted by GerT LM, and vice versa. Figure 3 presents a scatter plot with the results. A clear pattern, evident from the plot, reveals Treebank POS tag set. Austria Netherlands Sweden Germany France Italy Romania Spain Portugal Perplexity by GerT Perplexity by RomT Figure 3: Perplexity of the GerT and RomT language models with respect to non-native utterances of speakers from various countries. that all English texts with underlying Romance native languages (under the diagonal) are better predicted (i.e., obtain lower perplexity) by the RomT LM. All Germanic native languages (except German), on the other hand, are better predicted by the GerT LM. This finding further supports the hypothesis that non-native productions and translationese tend to exhibit similar L1-related traits. 7 Conclusion We presented a unified computational approach for studying constrained language, where many of the features were theoretically motivated. We demonstrated that while translations and nonnative productions are two distinct language varieties, they share similarities that stem from lower lexical richness, more careful choice of idiomatic expressions and pronouns, and (presumably) subconscious excessive usage of explicitation cohesive devices. More dramatically, the language modeling experiments reveal salient ties between the native language of non-native speakers and the source language of translationese, highlighting the unified L1-related traces of L1 in both scenarios. Our findings are intriguing: native speakers and translators, in contrast to non-native speakers, use their native language, yet translation seems to gravitate towards non-native language use. The main contribution of this work is empirical, establishing the connection between these types of language production. While we believe that these common tendencies are not incidental, more research is needed in order to establish a theoretical 1877 explanation for the empirical findings, presumably (at least partially) on the basis of the cognitive load resulting from the simultaneous presence of two linguistic systems. We are interested in expanding the preliminary results of this work: we intend to replicate the experiments with more languages and more domains, investigate additional varieties of constrained language and employ more complex lexical, syntactic and discourse features. We also plan to investigate how the results vary when limited to specific L1s. 
Acknowledgments This research was supported by the Israeli Ministry of Science and Technology. We are immensely grateful to Yuval Nov for much advice and helpful suggestions. We are also indebted to Roy Bar-Haim, Anca Bucur, Liviu P. Dinu, and Yosi Mass for their support and guidance during the early stages of this work, and we thank our anonymous reviewers for their valuable insights. References Omar S. Al-Shabab. 1996. Interpretation and the language of translation: creativity and conventions in translation. Janus, Edinburgh. Mona Baker. 1993. Corpus linguistics and translation studies: Implications and applications. In Mona Baker, Gill Francis, and Elena TogniniBonelli, editors, Text and technology: in honour of John Sinclair, John Benjamins, Amsterdam, pages 233–252. Marco Baroni and Silvia Bernardini. 2006. A new approach to the study of Translationese: Machine-learning the difference between original and translated text. Literary and Linguistic Computing 21(3):259–274. Viktor Becher. 2010. Abandoning the notion of “translation-inherent” explicitation: Against a dogma of translation studies. Across Languages and Cultures 11(1):1–28. Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 327–337. Yevgeni Berzak, Roi Reichart, and Boris Katz. 2015. Contrastive analysis with predictive power: Typology driven estimation of grammatical error distributions in ESL. In Proceedings of the 19th Conference on Computational Natural Language Learning. pages 94–102. David Birdsong. 1992. Ultimate attainment in second language acquisition. Language 68(4):706– 755. Daniel Blanchard, Joel Tetreault, Derrick Higgins, Aoife Cahill, and Martin Chodorow. 2013. TOEFL11: A corpus of non-native english. ETS Research Report Series 2013(2):i–15. Shoshana Blum-Kulka. 1986. Shifts of cohesion and coherence in translation. In Juliane House and Shoshana Blum-Kulka, editors, Interlingual and intercultural communication Discourse and cognition in translation and second language acquisition studies, Gunter Narr Verlag, volume 35, pages 17–35. Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. The utility of article and preposition error correction systems for English language learners: Feedback and assessment. Language Testing 27(3):419–436. Rene Coppieters. 1987. Competence differences between native and near-native speakers. Language 63(3):544–573. Heidi C. Dulay and Marina K. Burt. 1974. Natural sequences in child second language acquisition. Language learning 24(1):37–53. Rod Ellis. 1985. Understanding Second Language Acquisition. Oxford Applied Linguistics. Oxford University Press. Britt Erman, Annika Denke, Lars Fant, and Fanny Forsberg Lundell. 2014. Nativelike expression in the speech of long-residency L2 users: A study of multiword structures in L2 English, French and Spanish. International Journal of Applied Linguistics . Dominique Estival, Tanja Gaustad, Son Bao Pham, Will Radford, and Ben Hutchinson. 2007. Author profiling for English emails. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics. pages 263–272. Federico Gaspari and Silvia Bernardini. 2008. Comparing non-native and translated language: Monolingual comparable corpora with a twist. 
In Proceedings of The International Symposium 1878 on Using Corpora in Contrastive and Translation Studies. Jeroen Geertzen, Theodora Alexopoulou, and Anna Korhonen. 2013. Automatic linguistic annotation of large scale L2 databases: The EFCambridge open language database (EFCAMDAT). In Proceedings of the 31st Second Language Research Forum. Cascadilla Proceedings Project, Somerville, MA. Martin Gellerstam. 1986. Translationese in Swedish novels translated from English. In Lars Wollin and Hans Lindquist, editors, Translation Studies in Scandinavia, CWK Gleerup, Lund, pages 88–95. Howard Giles and Jane L. Byrne. 1982. An intergroup approach to second language acquisition. Journal of Multilingual and Multicultural Development 3(1):17–40. Sylviane Granger. 2003. The international corpus of learner English: a new resource for foreign language learning and teaching and second language acquisition research. Tesol Quarterly pages 538–546. Jack Grieve. 2007. Quantitative authorship attribution: An evaluation of techniques. Literary and Linguistic Computing 22(3):251–270. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. SIGKDD Explorations 11(1):10–18. Sandra Halverson. 2003. The cognitive basis of translation universals. Target 15(2):197–241. Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, pages 187–197. Eli Hinkel. 2001. Matters of cohesion in L2 academic texts. Applied Language Learning 12(2):111–132. Juliane House. 2008. Beyond intervention: Universals in translation? trans-kom 1(1):6–19. Iustina Ilisei and Diana Inkpen. 2011. Translationese traits in Romanian newspapers: A machine learning approach. International Journal of Computational Linguistics and Applications 2(1-2). Scott Jarvis and Aneta Pavlenko. 2008. Crosslinguistic influence in language and cognition. Routledge. Frederick Jelinek, Robert L. Mercer, Lalit R. Bahl, and J. K. Baker. 1977. Perplexity—a measure of the difficulty of speech recognition tasks. Journal of the Acoustical Society of America 62:S63. Supplement 1. Jacqueline S. Johnson and Elissa L. Newport. 1991. Critical period effects on universal properties of language: The status of subjacency in the acquisition of a second language. Cognition 39(3):215–258. S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, and K.R.K. Murthy. 2001. Improvements to platt’s smo algorithm for svm classifier design. Neural Computation 13(3):637–649. Eric Kellerman and Michael Sharwood-Smith. 1986. Crosslinguistic Influence in Second Language Acquisition. Language Teaching Methodology Series. Pearson College Division. Dorothy Kenny. 2001. Lexis and creativity in translation: a corpus-based study. St. Jerome. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. MT Summit. Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 1318–1326. Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005. Determining an author’s native language by mining a text for errors. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. ACM, pages 624–628. David Kurokawa, Cyril Goutte, and Pierre Isabelle. 
2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of MT-Summit XII. pages 81–88. Donna Lardiere. 2006. Ultimate Attainment in Second Language Acquisition: A Case Study. L. Erlbaum. Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012. Language models for machine translation: Original vs. translated texts. Computational Linguistics 38(4):799–825. 1879 Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2013. Improving statistical machine translation by adapting translation models to translationese. Computational Linguistics 39(4):999–1023. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. A. Mauranen and P. Kujam¨aki, editors. 2004. Translation universals: Do they exist?. John Benjamins. Frederick Mosteller and David L Wallace. 1963. Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed Federalist Papers. Journal of the American Statistical Association 58(302):275–309. Sergiu Nisioi. 2015a. Feature analysis for native language identification. In Alexander F. Gelbukh, editor, Proceedings of the 16th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2015). Springer, Lecture Notes in Computer Science. Sergiu Nisioi. 2015b. Unsupervised classification of translated texts. In Chris Biemann, Siegfried Handschuh, Andr´e Freitas, Farid Meziane, and Elisabeth M´etais, editors, Natural Language Processing and Information Systems: Proceedings of the 20th International Conference on Applications of Natural Language to Information Systems, NLDB. Springer, volume 9103 of Lecture Notes in Computer Science, pages 323– 334. Sergiu Nisioi, Ella Rabinovich, Liviu P. Dinu, and Shuly Wintner. 2016. A corpus of native, non-native and translated texts. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016. Terence Odlin. 1989. Language Transfer: CrossLinguistic Influence in Language Learning. Cambridge Applied Linguistics. Cambridge University Press. Maeve Olohan. 2002. Leave it out! using a comparable corpus to investigate aspects of explicitation in translation. Cadernos de Traduc¸˜ao 1(9):153–169. Maeve Olohan. 2003. How frequent are the contractions? A study of contracted forms in the translational English corpus. Target 15(1):59– 89. Lin Øver˚as. 1998. In search of the third code: An investigation of norms in literary translation. Meta 43(4):557–570. Martin Potthast, Alberto Barr´on-Cede˜no, Benno Stein, and Paolo Rosso. 2011. Cross-language plagiarism detection. Language Resources and Evaluation 45(1):45–62. Ella Rabinovich and Shuly Wintner. 2015. Unsupervised identification of translationese. Transactions of the Association for Computational Linguistics 3:419–432. Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics 29(3):349–380. Larry Selinker. 1972. Interlanguage. International Review of Applied Linguistics in Language Teaching 10(1–4):209–232. Miriam Shlesinger. 1989. Simultaneous Interpretation as a Factor in Effecting Shifts in the Position of Texts on the Oral-literate Continuum. 
Master’s thesis, Tel Aviv University, Faculty of the Humanities, Department of Poetics and Comparative Literature. Miriam Shlesinger. 2003. Effects of presentation rate on working memory in simultaneous interpreting. The Interpreters Newsletter 12:37–49. Bernard Smith and Michael Swan. 2001. Learner English: A teacher’s guide to interference and other problems. Ernst Klett Sprachen. Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A report on the first native language identification shared task. In Proceedings of the Eighth Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics. Laura Mayfield Tomokiyo and Rosie Jones. 2001. You’re not from’round here, are you?: naive Bayes detection of non-native utterance text. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies. Association for Computational Linguistics, pages 1–8. 1880 Gideon Toury. 1979. Interlanguage and its manifestations in translation. Meta 24(2):223–231. Gideon Toury. 1980. In Search of a Theory of Translation. The Porter Institute for Poetics and Semiotics, Tel Aviv University, Tel Aviv. Gideon Toury. 1995. Descriptive Translation Studies and beyond. John Benjamins, Amsterdam / Philadelphia. Yulia Tsvetkov, Naama Twitto, Nathan Schneider, Noam Ordan, Manaal Faruqui, Victor Chahuneau, Shuly Wintner, and Chris Dyer. 2013. Identifying the L1 of non-native writers: the CMU-Haifa system. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, pages 279–287. Hans van Halteren. 2008. Source language markers in EUROPARL translations. In Donia Scott and Hans Uszkoreit, editors, COLING 2008, 22nd International Conference on Computational Linguistics, Proceedings of the Conference, 18-22 August 2008, Manchester, UK. pages 937–944. Ria Vanderauwera. 1985. Dutch Novels Translated into English: The transformation of a “minority” literature. Rodopi. Vered Volansky, Noam Ordan, and Shuly Wintner. 2015. On the features of translationese. Digital Scholarship in the Humanities 30(1):98–118. Appendix A - Distribution of L1s in Translations and Non-native Texts We assume that native languages of non-native speakers are highly correlated with (although not strictly identical to) their country of origin. country of origin tokens(T) tokens(NN) Austria 2K Belgium 67K Bulgaria 25K 6K Cyprus 35K Czech Republic 21K 3K Denmark 444K 14K Estonia 32K 50K Finland 500K 81K France 3,486K 28K Germany 3,768K 17K Greece 944K 13K Hungary 167K 38K Italy 1,690K 15K Latvia 38K 13K Lithuania 177K 18K Luxembourg 46K Malta 28K 40K Netherlands 1,746K 64K Poland 522K 36K Portugal 1,633K 54K Romania 244K 29K Slovakia 88K 6K Slovenia 43K 1K Spain 1,836K 54K Sweden 951K 52K Table 5: Distribution of L1s by country. 1881
2016
176
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1882–1892, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Text Pair Similarity with Context-sensitive Autoencoders Hadi Amiri1, Philip Resnik1, Jordan Boyd-Graber2, Hal Daum´e III1 1Institute for Advanced Computer Studies University of Maryland, College Park, MD 2Department of Computer Science University of Colorado, Boulder, CO {hadi,resnik,hal}@umd.edu,[email protected] Abstract We present a pairwise context-sensitive Autoencoder for computing text pair similarity. Our model encodes input text into context-sensitive representations and uses them to compute similarity between text pairs. Our model outperforms the state-of-the-art models in two semantic retrieval tasks and a contextual word similarity task. For retrieval, our unsupervised approach that merely ranks inputs with respect to the cosine similarity between their hidden representations shows comparable performance with the state-of-the-art supervised models and in some cases outperforms them. 1 Introduction Representation learning algorithms learn representations that reveal intrinsic low-dimensional structure in data (Bengio et al., 2013). Such representations can be used to induce similarity between textual contents by computing similarity between their respective vectors (Huang et al., 2012; Silberer and Lapata, 2014). Recent research has made substantial progress on semantic similarity using neural networks (Rothe and Sch¨utze, 2015; Dos Santos et al., 2015; Severyn and Moschitti, 2015). In this work, we focus our attention on deep autoencoders and extend these models to integrate sentential or document context information about their inputs. We represent context information as low dimensional vectors that will be injected to deep autoencoders. To the best of our knowledge, this is the first work that enables integrating context into autoencoders. In representation learning, context may appear in various forms. For example, the context of a current sentence in a document could be either its neighboring sentences (Lin et al., 2015; Wang and Cho, 2015), topics associated with the sentence (Mikolov and Zweig, 2012; Le and Mikolov, 2014), the document that contains the sentence (Huang et al., 2012), as well as their combinations (Ji et al., 2016). It is important to integrate context into neural networks because these models are often trained with only local information about their individual inputs. For example, recurrent and recursive neural networks only use local information about previously seen words in a sentence to predict the next word or composition.1 On the other hand, context information (such as topical information) often capture global information that can guide neural networks to generate more accurate representations. We investigate the utility of context information in three semantic similarity tasks: contextual word sense similarity in which we aim to predict semantic similarity between given word pairs in their sentential context (Huang et al., 2012; Rothe and Sch¨utze, 2015), question ranking in which we aim to retrieve semantically equivalent questions with respect to a given test question (Dos Santos et al., 2015), and answer ranking in which we aim to rank single-sentence answers with respect to a given question (Severyn and Moschitti, 2015). 
The contributions of this paper are as follows: (1) integrating context information into deep autoencoders and (2) showing that such integration improves the representation performance of deep autoencoders across several different semantic similarity tasks. Our model outperforms the state-of-the-art su1For example, RNNs can predict the word “sky” given the sentence “clouds are in the ,” but they are less accurate when longer history or global context is required, e.g. predicting the word “french” given the paragraph “I grew up in France. . . . I speak fluent .” 1882 pervised baselines in three semantic similarity tasks. Furthermore, the unsupervised version of our autoencoder show comparable performance with the supervised baseline models and in some cases outperforms them. 2 Context-sensitive Autoencoders 2.1 Basic Autoencoders We first provide a brief description of basic autoencoders and extend them to context-sensitive ones in the next Section. Autoencoders are trained using a local unsupervised criterion (Vincent et al., 2010; Hinton and Salakhutdinov, 2006; Vincent et al., 2008). Specifically, the basic autoencoder in Figure 1(a) locally optimizes the hidden representation h of its input x such that h can be used to accurately reconstruct x, h = g(Wx + bh) (1) ˆx = g(W′h + bˆx), (2) where ˆx is the reconstruction of x, the learning parameters W ∈Rd′×d and W′ ∈Rd×d′ are weight matrices, bh ∈Rd′ and bˆx ∈Rd are bias vectors for the hidden and output layers respectively, and g is a nonlinear function such as tanh(.).2 Equation (1) encodes the input into an intermediate representation and Equation (2) decodes the resulting representation. Training a single-layer autoencoder corresponds to optimizing the learning parameters to minimize the overall loss between inputs and their reconstructions. For real-valued x, squared loss is often used, l(x) = ||x −ˆx||2, (Vincent et al., 2010): min Θ n X i=1 l(x(i)) Θ = {W, W′, bh, bˆx}. (3) This can be achieved using mini-batch stochastic gradient descent (Zeiler, 2012). 2.2 Integrating Context into Autoencoders We extend the above basic autoencoder to integrate context information about inputs. We assume that—for each training example x ∈Rd— we have a context vector cx ∈Rk that contains contextual information about the input.3 The na2If the squared loss is used for optimization, as in Equation (3), nonlinearity is often not used in Equation (2) (Vincent et al., 2010). 3We slightly abuse the notation throughout this paper by referring to cx or hi as vectors, not elements of vectors. 𝒉 𝒙 𝑾 𝒙 𝑾′ ^ (a) Basic Autoencoder 𝒉 𝒙 𝑾 𝑾′ 𝒙 ^ 𝑽 𝒉𝒄 ^ 𝒉𝒄 𝑽′ (b) Context Autoencoder Figure 1: Schematic representation of basic and context-sensitive autoencoders: (a) Basic autoencoder maps its input x into the representation h such that it can reconstruct x with minimum loss, and (b) Context-sensitive autoencoder maps its inputs x and hc into a context-sensitive representation h (hc is the representation of the context information associated to x). ture of this context vector depends on the input and target task. For example, neighboring words can be considered as the context of a target word in contextual word similarity task. We first learn the hidden representation hc ∈ Rd′ for the given context vector cx. For this, we use the same process as discussed above for the basic autoencoder where we use cx as the input in Equations (1) and (2) to obtain hc. We then use hc to develop our context-sensitive autoencoder as depicted in Figure 1(b). 
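As a concrete reference point, the sketch below implements the basic autoencoder of Equations (1)-(3) in PyTorch (our choice of framework; the paper does not tie the model to one). The same module, applied to the context vector cx, yields the context representation hc used in what follows. Hyper-parameters are illustrative, not the paper's settings.

```python
# Minimal PyTorch sketch of the basic autoencoder of Equations (1)-(3); the
# same module applied to the context vector c_x yields h_c. Framework and
# hyper-parameters are illustrative choices, not those of the paper.
import torch
import torch.nn as nn

class BasicAutoencoder(nn.Module):
    def __init__(self, d, d_hidden):
        super().__init__()
        self.encode = nn.Linear(d, d_hidden)    # W, b_h
        self.decode = nn.Linear(d_hidden, d)    # W', b_x

    def forward(self, x):
        h = torch.tanh(self.encode(x))          # Eq. (1)
        x_hat = self.decode(h)                  # Eq. (2); linear, as squared loss is used
        return h, x_hat

def train(model, X, n_epochs=50, lr=0.01, batch_size=64):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(n_epochs):
        for i in range(0, len(X), batch_size):           # mini-batch SGD, Eq. (3)
            x = X[i:i + batch_size]
            _, x_hat = model(x)
            loss = ((x - x_hat) ** 2).sum(dim=1).mean()  # squared reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```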
This autoencoder maps its inputs x and hc into a context-sensitive representation h as follows: h = g(Wx + Vhc + bh) (4) ˆx = g(W′h + bˆx) (5) ˆhc = g(V′h + bˆhc). (6) Our intuition is that if h leads to a good reconstruction of its inputs, it has retained information available in the input. Therefore, it is a contextsensitive representation. The loss function must then compute the loss between the input pair (x, hc) and its reconstruction (ˆx, ˆhc). For optimization, we can still use squared loss with a different set of parameters to minimize the overall loss on the training examples: l(x, hc) = ||x −ˆx||2 + λ||hc −ˆhc||2 min Θ n X i=1 l(x(i), h(i) c ) Θ = {W, W′, V, V′, bh, bˆx, bˆhc}, (7) 1883 𝒄𝑥 𝑽0 h1 hc 𝒙 𝑾1 𝑽1 hi-1 hi 𝑾𝑖 hc 𝑽𝑖 DAE-i DAE-1 hc DAE-0 (a) Network initialization h1 hn hc 𝒙 𝒄𝑥 𝑽0 𝑾1 𝑾𝑛 𝑽𝑛 𝑽1 𝑓𝜃(𝒙) (b) Stacking Denoising Autoencoders hc 𝒙 𝒄𝑥 hc 𝑽0 Encoder Decoder 𝑾1 𝑾𝑛 𝑽1 𝑽𝑛 𝑾𝑛𝑇 𝑾1 𝑇 𝑽𝑛𝑇 𝑽1 𝑇 𝑽0 𝑇 𝒙 ^ 𝒄𝑥 ^ ^ (c) Unrolling and Fine-tuning Figure 2: Proposed framework for integrating context into deep autoencoders. Context layer (cx and hc) and context-sensitive representation of input (hn) are shown in light red and gray respectively. (a) Pretraining properly initializes a stack of context-sensitive denoising autoencoders (DAE), (b) A contextsensitive deep autoencoder is created from properly initialized DAEs, (c) The network in (b) is unrolled and its parameters are fine-tuned for optimal reconstruction. where λ ∈[0, 1] is a weight parameter that controls the effect of context information in the reconstruction process. 2.2.1 Denoising Denoising autoencoders (DAEs) reconstruct an input from a corrupted version of it for more effective learning (Vincent et al., 2010). The corrupted input is then mapped to a hidden representation from which we obtain the reconstruction. However, the reconstruction loss is still computed with respect to the uncorrupted version of the input as before. Denoising autoencoders effectively learn representations by reversing the effect of the corruption process. We use masking noise to corrupt the inputs where a fraction η of input units are randomly selected and set to zero (Vincent et al., 2008). 2.2.2 Deep Context-Sensitive Autoencoders Autoencoders can be stacked to create deep networks. A deep autoencoder is composed of multiple hidden layers that are stacked together. The initial weights in such networks need to be properly initialized through a greedy layer-wise training approach. Random initialization does not work because deep autoencoders converge to poor local minima with large initial weights and result in tiny gradients in the early layers with small initial weights (Hinton and Salakhutdinov, 2006). Our deep context-sensitive autoencoder is composed of a stacked set of DAEs. As discussed above, we first need to properly initialize the learning parameters (weights and biases) associated to each DAE. As shown in Figure 2(a), we first train DAE-0, which initializes parameters associated to the context layer. The training procedure is exactly the same as training a basic autoencoder (Section 2.1 and Figure 1(a)).4 We then treat hc and x as “inputs” for DAE-1 and use the same approach as in training a context-sensitive autoencoder to initialize the parameters of DAE-1 (Section 2.2 and Figure 1(b)). Similarly, the ith DAE is built on the output of the (i −1)th DAE and so on until the desired number of layers (e.g. n layers) are initialized. For denoising, the corruption is only applied on “inputs” of individual autoencoders. 
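Continuing the sketch above, one context-sensitive denoising layer (Equations 4-7, with the masking-noise corruption of Section 2.2.1) might look as follows; the corruption fraction and the weight lambda are illustrative values rather than the paper's settings.

```python
# Sketch of one context-sensitive denoising layer (Equations 4-7). Masking
# noise zeroes a random fraction eta of input units; the loss compares the
# reconstructions with the *uncorrupted* x and h_c. eta and lam are
# illustrative values only.
import torch
import torch.nn as nn

class ContextAutoencoderLayer(nn.Module):
    def __init__(self, d, d_hidden, lam=0.5, eta=0.2):
        super().__init__()
        self.W = nn.Linear(d, d_hidden)                       # W, b_h
        self.V = nn.Linear(d_hidden, d_hidden, bias=False)    # V
        self.W_dec = nn.Linear(d_hidden, d)                   # W', b_x
        self.V_dec = nn.Linear(d_hidden, d_hidden)            # V', b_hc
        self.lam, self.eta = lam, eta

    def corrupt(self, t):
        mask = (torch.rand_like(t) > self.eta).float()
        return t * mask                                       # masking noise

    def forward(self, x, h_c):
        x_n, hc_n = self.corrupt(x), self.corrupt(h_c)        # corrupt inputs only
        h = torch.tanh(self.W(x_n) + self.V(hc_n))            # Eq. (4)
        x_hat = self.W_dec(h)                                 # Eq. (5)
        hc_hat = torch.tanh(self.V_dec(h))                    # Eq. (6)
        loss = (((x - x_hat) ** 2).sum(1)                     # Eq. (7)
                + self.lam * ((h_c - hc_hat) ** 2).sum(1)).mean()
        return h, loss
```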
When training DAE-i, for example, h_{i-1} and hc are first obtained from the original inputs of the network (x and cx) through a single forward pass, and their corrupted versions are then computed to train DAE-i. Figure 2(b) shows that the n properly initialized DAEs can be stacked to form a deep context-sensitive autoencoder. We unroll this network to fully optimize its weights through gradient descent and backpropagation (Vincent et al., 2010; Hinton and Salakhutdinov, 2006).

2.2.3 Unrolling and Fine-tuning

We optimize the learning parameters of our initialized context-sensitive deep autoencoder by unfolding its n layers, making a (2n-1)-layer network whose lower layers form an "encoder" network and whose upper layers form a "decoder" network (Figure 2(c)). A global fine-tuning stage backpropagates through the entire network to fine-tune the weights for optimal reconstruction. In this stage, we update the network parameters again by training the network to minimize the loss between the original inputs and their actual reconstruction. We backpropagate the error derivatives first through the decoder network and then through the encoder network. Each decoder layer tries to recover the input of its corresponding encoder layer. As such, the weights are initially symmetric and the decoder weights do need to be learned. After the training is complete, the hidden layer hn contains a context-sensitive representation of the inputs x and cx.

2.3 Context Information

Context is task and data dependent. For example, a sentence or document that contains a target word forms the word's context. When context information is not readily available, we use topic models to determine such context for individual inputs (Blei et al., 2003; Stevens et al., 2012). In particular, we use Non-Negative Matrix Factorization (NMF) (Lin, 2007): given a training set with n instances, i.e., X \in R^{v \times n}, where v is the size of a global vocabulary and the scalar k is the number of topics in the dataset, we learn the topic matrix D \in R^{v \times k} and the context matrix C \in R^{k \times n} using the following sparse coding algorithm:

\min_{D,C} ||X - DC||_F^2 + \mu ||C||_1,    s.t.  D \geq 0, C \geq 0,    (8)

where each column in C is a sparse representation of an input over all topics and will be used as global context information in our model. We obtain context vectors for test instances by transforming them according to the NMF model fitted on the training data. We also note that advanced topic modeling approaches, such as syntactic topic models (Boyd-Graber and Blei, 2009), could be more effective here, as they generate linguistically richer context information.

3 Text Pair Similarity

We present unsupervised and supervised approaches for predicting semantic similarity scores for input texts (e.g., a pair of words), each with its corresponding context information. These scores will then be used to rank "documents" against "queries" (in retrieval tasks) or to evaluate how the predictions of a model correlate with human judgments (in the contextual word sense similarity task).

Figure 3: Pairwise context-sensitive autoencoder for computing text pair similarity.
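Both the unsupervised and the supervised settings described next assume that a context vector is available for every input. When it is not, Section 2.3 derives one from NMF topic proportions; the following sketch shows how such context vectors might be produced with scikit-learn. This is our illustration only: the toy documents are made up, and scikit-learn's NMF regularises the factors somewhat differently from the one-sided L1 penalty of Equation (8), although an L1 penalty can be enabled through its regularisation options.

    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    # toy stand-in for the training questions/answers; in the paper k is 200-300,
    # chosen by the NMF reconstruction error on the training data (Section 4.2)
    train_docs = [
        "how do I merge two branches in git",
        "git merge conflict during rebase",
        "python list comprehension over a dictionary",
        "sorting a python dictionary by value",
        "mount an external hard drive on ubuntu",
        "ubuntu does not detect the second monitor",
    ]
    test_docs = ["resolving a merge conflict in git"]

    vec = TfidfVectorizer(stop_words="english")
    X_train = vec.fit_transform(train_docs)           # documents x terms (transposed w.r.t. Section 2.3's X)

    nmf = NMF(n_components=3, init="nndsvd", max_iter=500, random_state=0)
    C_train = nmf.fit_transform(X_train)              # each row: sparse topic proportions = a global context vector
    C_test = nmf.transform(vec.transform(test_docs))  # test contexts from the fitted model, as in Section 2.3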
In unsupervised settings, given a pair of input texts with their corresponding context vectors, (x1,cx1) and (x2,cx2), we determine their semantic similarity score by computing the cosine similarity between their hidden representations h1 n and h2 n respectively. In supervised settings, we use a copy of our context-sensitive autoencoder to make a pairwise architecture as depicted in Figure 3. Given (x1,cx1), (x2,cx2), and their binary relevance score, we use h1 n and h2 n as well as additional features (see below) to train our pairwise network (i.e. further fine-tune the weights) to predict a similarity score for the input pair as follows: rel(x1, x2) = softmax(M0a+M1h1 n+M2h2 n+b) (9) where a carries additional features, Ms are weight matrices, and b is the bias. We use the difference and similarity between the context-sensitive representations of inputs, h1 n and h2 n, as additional features: hsub = |h1 n −h2 n| hdot = h1 n ⊙h2 n, (10) where hsub and hdot capture the element-wise difference and similarity (in terms of the sign of elements in each dimension) between h1 n and h2 n, respectively. We expect elements in hsub to be small for semantically similar and relevant inputs and large otherwise. Similarly, we expect elements in hdot to be positive for relevant inputs and negative otherwise. We can use any task-specific feature as additional features. This includes features from the 1885 minimal edit sequences between parse trees of the input pairs (Heilman and Smith, 2010; Yao et al., 2013), lexical semantic features extracted from resources such as WordNet (Yih et al., 2013), or other features such as word overlap features (Severyn and Moschitti, 2015; Severyn and Moschitti, 2013). We can also use additional features (Equation 10), computed for BOW representations of the inputs x1 and x2. Such additional features improve the performance of our and baseline models. 4 Experiments In this Section, we use t-test for significant testing and asterisk mark (*) to indicate significance at α = 0.05. 4.1 Data and Context Information We use three datasets: “SCWS” a word similarity dataset with ground-truth labels on similarity of pairs of target words in sentential context from Huang et al. (2012); “qAns” a TREC QA dataset with ground-truth labels for semantically relevant questions and (single-sentence) answers from Wang et al. (2007); and “qSim” a community QA dataset crawled from Stack Exchange with ground-truth labels for semantically equivalent questions from Dos Santos et al. (2015). Table 1 shows statistics of these datasets. To enable direct comparison with previous work, we use the same training, development, and test data provided by Dos Santos et al. (2015) and Wang et al. (2007) for qSim and qAns respectively and the entire data of SCWS (in unsupervised setting). We consider local and global context for target words in SCWS. The local context of a target word is its ten neighboring words (five before and five after) (Huang et al., 2012), and its global context is a short paragraph that contains the target word (surrounding sentences). We compute average word embeddings to create context vectors for target words. Also, we consider question title and body and answer text as input in qSim and qAns and use NMF to create global context vectors for questions and answers (Section 2.3). 4.2 Parameter Setting We use pre-trained word vectors from GloVe (Pennington et al., 2014). However, because qSim questions are about specific technical topics, we only use GloVe as initialization. 
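For concreteness, the supervised relevance scorer of Equations (9) and (10) can be sketched as follows. This is our illustration only: the hidden size matches Section 4.2, but the representations and weight matrices below are random stand-ins; in the paper, M0, M1, M2 and b are trained jointly with the rest of the network, and a may also carry task-specific features such as word overlap.

    import numpy as np

    rng = np.random.default_rng(0)
    dp = 200                                   # hidden size d' (Section 4.2)

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def additional_features(h1, h2):
        h_sub = np.abs(h1 - h2)                # element-wise difference, Eq. (10)
        h_dot = h1 * h2                        # element-wise product (sign agreement), Eq. (10)
        return np.concatenate([h_sub, h_dot])

    def relevance(h1, h2, a, M0, M1, M2, b):
        logits = M0 @ a + M1 @ h1 + M2 @ h2 + b    # Eq. (9); two classes: irrelevant / relevant
        return softmax(logits)[1]                  # probability of the "relevant" class

    h1, h2 = rng.standard_normal(dp), rng.standard_normal(dp)   # stand-ins for h1_n and h2_n
    a = additional_features(h1, h2)
    M0 = 0.01 * rng.standard_normal((2, a.size))
    M1 = 0.01 * rng.standard_normal((2, dp))
    M2 = 0.01 * rng.standard_normal((2, dp))
    b = np.zeros(2)
    score = relevance(h1, h2, a, M0, M1, M2, b)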
Data Split #Pairs %Rel SCWS All data 2003 100.0% qAns Train-All 53K 12.00% Train 4,718 7.400% Dev 1,148 19.30% Test 1,517 18.70% qSim Train 205K 0.048% Dev 43M 0.001% Test 82M 0.001% Table 1: Data statistics. (#Pairs: number of wordword pairs in SCWS, question-answer pairs in qAns, and question-question pairs in qSim; %Rel: percentage of positive pairs.) For the unsupervised SCWS task, following Huang et al. (2012), we use 100-dimensional word embeddings, d = 100, with hidden layers and context vectors of the same size, d′ = 100, k = 100. In this unsupervised setting, we set the weight parameter λ = .5, masking noise η = 0, depth of our model n = 3. Tuning these parameters will further improve the performance of our model. For qSim and qAns, we use 300-dimensional word embeddings, d = 300, with hidden layers of size d′ = 200. We set the size of context vectors k (number of topics) using the reconstruction error of NMF on training data for different values of k. This leads to k = 200 for qAns and k = 300 for qSim. We tune the other hyper-parameters (η, n, and λ) using development data. We set each input x (target words in SCWS, question titles and bodies in qSim, and question titles and single-sentence answers in qAns) to the average of word embeddings in the input. Input vectors could be initialized through more accurate approaches (Mikolov et al., 2013b; Li and Hovy, 2014); however, averaging leads to reasonable representations and is often used to initialize neural networks (Clinchant and Perronnin, 2013; Iyyer et al., 2015). 4.3 Contextual Word Similarity We first consider the contextual word similarity task in which a model should predict the semantic similarity between words in their sentential context. For this evaluation, we compute Spearman’s ρ correlation (Kokoska and Zwillinger, 2000) between the “relevance scores” predicted by different models and human judgments (Section 3). The state-of-the-art model for this task is a semi-supervised approach (Rothe and Sch¨utze, 2015). This model use resources like WordNet 1886 to compute embeddings for different senses of words. Given a pair of target words and their context (neighboring words and sentences), this model represents each target word as the average of its sense embeddings weighted by cosine similarity to the context. The cosine similarity between the representations of words in a pair is then used to determine their semantic similarity. Also, the Skip-gram model (Mikolov et al., 2013a) is extended in (Neelakantan et al., 2014; Chen et al., 2014) to learn contextual word pair similarity in an unsupervised way. Table 2 shows the performance of different models on the SCWS dataset. SAE, CSAE-LC, CSAE-LGC show the performance of our pairwise autoencoders without context, with local context, and with local and global context, respectively. In case of CSAE-LGC, we concatenate local and global context to create context vectors. CSAELGC performs significantly better than the baselines, including the semi-supervised approach in Rothe and Sch¨utze (2015). It is also interesting that SAE (without any context information) outperforms the pre-trained word embeddings (Pretrained embeds.). Comparing the performance of CSAE-LC and CSAE-LGC indicates that global context is useful for accurate prediction of semantic similarity between word pairs. We further investigate these models to understand why global context is useful. 
Table 3 shows an example in which global context (words in neighboring sentences) effectively help to judge the semantic similarity between “Airport” and “Airfield.” This is while local context (ten neighboring words) are less effective in helping the models to relate the two words. Furthermore, we study the effect of global context in different POS tag categories. As Figure 4 shows global context has greater impact on A-A and N-N categories. We expect high improvement in the N-N category as noun senses are fairly self-contained and often refer to concrete things. Thus broader (not only local) context is needed to judge their semantic similarity. However, we don’t know the reason for improvement on the A-A category as, in context, adjective interpretation is often affected by local context (e.g., the nouns that adjectives modify). One reason for improvement could be because adjectives are often interchangeable and this characteristic makes their meaning to be less sensitive to local context. Model Context ρ×100 Huang et al. (2012) LGC 65.7 Chen et al. (2014) LGC 65.4 Neelakantan et al. (2014) LGC 69.3 Rothe and Sch¨utze (2015) LGC 69.8 Pre-trained embeds. (GloVe) 60.2 SAE 61.1 CSAE LC 66.4 CSAE LGC 70.9* Table 2: Spearman’s ρ correlation between model predictions and human judgments in contextual word similarity. (LC: local context only, LGC: local and global context.) . . . No cases in Gibraltar were reported. The airport is built on the isthmus which the Spanish Government claim not to have been ceded in the Treaty of Utrecht. Thus the integration of Gibraltar Airport in the Single European Sky system has been blocked by Spain. The 1987 agreement for joint control of the airport with. . . . . . called “Tazi” by the German pilots. On 23 Dec 1942, the Soviet 24th Tank Corps reached nearby Skassirskaya and on 24 Dec, the tanks reached Tatsinskaya. Without any soldiers to defend the airfield it was abandoned under heavy fire. In a little under an hour, 108 Ju-52s and 16 Ju-86s took off for Novocherkassk – leaving 72 Ju-52s and many other aircraft burning on the ground. A new base was established. . . Table 3: The importance of global context (neighboring sentences) in predicting the semantically similar words (Airport, Airfield). 4.4 Answer Ranking Performance We evaluate the performance of our model in the answer ranking task in which a model should retrieve correct answers from a set of candidates for test questions. For this evaluation, we rank answers with respect to each test question according to the “relevance score” between question and each answer (Section 3). The state-of-the-art model for answer ranking on qAns is a pairwise convolutional neural network (PCNN) presented in (Severyn and Moschitti, 2015). PCNN is a supervised model that first maps input question-answer pairs to hidden representations through a standard convolutional neural network (CNN) and then utilizes these representations in a pairwise CNN to compute a relevance score for each pair. This model also utilizes external word overlap features for each questionanswer pair.5 PCNN outperforms other competing CNN models (Yu et al., 2014) and models that use 5Word overlap and IDF-weighted word overlap computed for (a): all words, and (b): only non-stop words for each question-answer pair (Severyn and Moschitti, 2015). 
syntax and semantic features (Heilman and Smith, 2010; Yao et al., 2013).

Figure 4: Effect of global context on contextual word similarity in different parts of speech (N: noun, V: verb, A: adjective). We only consider frequent categories.

Tables 4 and 5 show the performance of different models in terms of Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) in the supervised and unsupervised settings. PCNN-WO and PCNN show the baseline performance with and without word overlap features. SAE and CSAE show the performance of our pairwise autoencoders without and with context information, respectively. Their "X-DST" versions show their performance when the additional features (Equation 10) are used. These features are computed for the hidden and BOW representations of the question-answer pairs. We also include word overlap features as additional features.

Model      Train MAP  Train MRR  Train-All MAP  Train-All MRR
PCNN       62.58      65.91      67.09          72.80
SAE        65.69*     71.70*     69.54*         75.47*
CSAE       67.02*     70.99*     72.29*         77.29*
PCNN-WO    73.29      79.62      74.59          80.78
SAE-DST    72.53      76.97      76.38*         82.11*
CSAE-DST   71.26      76.88      76.75*         82.90*
Table 4: Answer ranking in the supervised setting.

Model      Train MAP  Train MRR  Train-All MAP  Train-All MRR
SAE        63.81      69.30      66.37          71.71
CSAE       64.86*     69.93*     66.76*         73.79*
Table 5: Answer ranking in the unsupervised setting.

Table 4 shows that SAE and CSAE consistently outperform PCNN, and SAE-DST and CSAE-DST outperform PCNN-WO, when the models are trained on the larger training dataset, "Train-All." However, PCNN shows slightly better performance than our model on "Train," the smaller training dataset. We conjecture this is because PCNN's convolution filter is wider (n-grams, n > 2) (Severyn and Moschitti, 2015). Table 5 shows that the performance of the unsupervised SAE and CSAE is comparable to, and in some cases better than, the performance of the supervised PCNN model. We attribute the high performance of our models to context information that leads to richer representations of the inputs. Furthermore, comparing the performance of CSAE and SAE in both the supervised and unsupervised settings in Tables 4 and 5 shows that context information consistently improves the MAP and MRR performance in all settings, except for MRR on "Train" (supervised setting), where the performance is comparable. Context-sensitive representations significantly improve the performance of our model and often lead to higher MAP than the models that ignore context information.

4.5 Question Ranking Performance

In the question ranking task, given a test question, a model should retrieve the top-K questions that are semantically equivalent to the test question, for K = {1, 5, 10}. We use qSim for this evaluation. We compare our autoencoders against the PCNN and PBOW-PCNN models presented in Dos Santos et al. (2015). PCNN is a pairwise convolutional neural network, and PBOW-PCNN is a joint model that combines vector representations obtained from a pairwise bag-of-words (PBOW) network and a pairwise convolutional neural network (PCNN). Both models are supervised, as they require similarity scores to train the network. Table 6 shows the performance of different models in terms of Precision at Rank K (P@K). CSAE is more precise than the baseline; the CSAE and CSAE-DST models consistently outperform the baselines on P@1, an important metric in search applications (CSAE also outperforms PCNN on P@5).
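The ranking results in Tables 4 to 7 are reported with standard retrieval metrics. For reference only (this is our helper code, not the paper's evaluation scripts), P@K, average precision, and reciprocal rank can be computed from a ranked list of 0/1 relevance labels as follows; MAP, MRR, and P@K are then the averages of these values over the test questions:

    def precision_at_k(labels, k):
        # labels: 0/1 relevance of the ranked candidates, best-scored first
        return sum(labels[:k]) / k

    def average_precision(labels):
        # assumes every relevant candidate appears somewhere in the ranked list
        hits, total = 0, 0.0
        for i, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                total += hits / i
        return total / max(hits, 1)

    def reciprocal_rank(labels):
        for i, rel in enumerate(labels, start=1):
            if rel:
                return 1.0 / i
        return 0.0

    ranked = [1, 0, 0, 1, 0]   # one ranked list per test question
    print(precision_at_k(ranked, 1), average_precision(ranked), reciprocal_rank(ranked))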
Although the context-sensitive models are more precise than the baselines at higher ranks, the PCNN and PBOW-PCNN models remain the best models for P@10. Tables 6 and 7 show that context information consistently improves the results at all ranks in both the supervised and unsupervised settings. The performance of the unsupervised SAE and CSAE models is comparable with that of the supervised PCNN model at higher ranks.

Model      P@1   P@5   P@10
PCNN       20.0  33.8  40.4
SAE        16.8  29.4  32.8
CSAE       21.4  34.9  37.2
PBOW-PCNN  22.3  39.7  46.4
SAE-DST    22.2  35.9  42.0
CSAE-DST   24.6  37.9  38.9
Table 6: Question ranking in the supervised setting.

Model      P@1   P@5   P@10
SAE        17.3  32.4  32.8
CSAE       18.6  33.2  34.1
Table 7: Question ranking in the unsupervised setting.

5 Performance Analysis and Discussion

We investigate the effect of context information in reconstructing inputs and try to understand the reasons for the improvement in reconstruction error. We compute the average reconstruction error of SAE and CSAE (Equations (3) and (7)). For these experiments, we set λ = 0 in Equation (7) so that we can directly compare the resulting loss of the two models. CSAE will still use context information with λ = 0, but it does not backpropagate the reconstruction loss of the context information.

Figures 5(a) and 5(b) show the average reconstruction error of SAE and CSAE on the qSim and qAns datasets. Context information consistently improves reconstruction. The improvement is greater on qSim, which contains a smaller number of words per question than qAns. Also, both models generate smaller reconstruction errors than NMF (Section 2.3). The lower performance of NMF is because it reconstructs inputs merely using the global topics identified in the datasets, while our models utilize both local and global information to reconstruct inputs.

Figure 5: Reconstruction error and improvement: (a) and (b) reconstruction error on qSim and qAns, respectively, where errNMF shows the reconstruction error of NMF and smaller error is better; (c) improvement in reconstruction error vs. topic density: greater improvement is obtained for topics with lower density.

5.1 Analysis of Context Information

The improvement in reconstruction error mainly stems from areas in the data where the "topic density" is lower. We define the topic density of a topic as the number of documents that are assigned to the topic by our topic model. We compute the average improvement in reconstruction error for each topic T_j using the loss functions of the basic and context-sensitive autoencoders:

\Delta_j = \frac{1}{|T_j|} \sum_{x \in T_j} \left[ l(x) - l(x, h_x) \right],

where we set λ = 0. Figure 5(c) shows the improvement in reconstruction error versus topic density on qSim. Topics with lower density show greater improvement. This is because they have insufficient training data to train the networks. However, injecting context information improves the reconstruction power of our model by providing more information.
The improvements in denser areas are smaller because neural networks can train effectively in these areas.6 5.2 Effect of Depth The intuition behind deep autoencoders (and, generally, deep neural networks) is that each layer learns a more abstract representation of the input than the previous one (Hinton and Salakhutdinov, 2006; Bengio et al., 2013). We investigate 6We observed the same pattern in qAns. 1889 1 2 3 4 5 6 7 60 62 64 66 68 70 72 #hidden layers ρ correlation Pe−trained CSAE−LC CSAE−LGC Figure 6: Effect of depth in contextual word similarity. Three hidden layers is optimal for this task. if adding depth to our context-sensitive autoencoder will improve its performance in the contextual word similarity task. Figure 6 shows that as we increase the depth of our autoencoders, their performances initially improve. The CSAE-LGC model that uses both local and global context benefits more from greater number of hidden layers than CSAE-LC that only uses local context. We attribute this to the use of global context in CSAE-LGC that leads to more accurate representations of words in their context. We also note that with just a single hidden layer, CSAE-LGC largely improves the performance as compared to CSAE-LC. 6 Related Work Representation learning models have been effective in many tasks such as language modeling (Bengio et al., 2003; Mikolov et al., 2013b), topic modeling (Nguyen et al., 2015), paraphrase detection (Socher et al., 2011), and ranking tasks (Yih et al., 2013). We briefly review works that use context information for text representation. Huang et al. (2012) presented an RNN model that uses document-level context information to construct more accurate word representations. In particular, given a sequence of words, the approach uses other words in the document as external (global) knowledge to predict the next word in the sequence. Other approaches have also modeled context at the document level (Lin et al., 2015; Wang and Cho, 2015; Ji et al., 2016). Ji et al. (2016) presented a context-sensitive RNN-based language model that integrates representations of previous sentences into the language model of the current sentence. They showed that this approach outperforms several RNN language models on a text coherence task. Liu et al. (2015) proposed a context-sensitive RNN model that uses Latent Dirichlet Allocation (Blei et al., 2003) to extract topic-specific word embeddings. Their best-performing model regards each topic that is associated to a word in a sentence as a pseudo word, learns topic and word embeddings, and then concatenates the embeddings to obtain topic-specific word embeddings. Mikolov and Zweig (2012) extended a basic RNN language model (Mikolov et al., 2010) by an additional feature layer to integrate external information (such as topic information) about inputs into the model. They showed that such information improves the perplexity of language models. In contrast to previous research, we integrate context into deep autoencoders. To the best of our knowledge, this is the first work to do so. Also, in this paper, we depart from most previous approaches by demonstrating the value of context information in sentence-level semantic similarity and ranking tasks such as QA ranking tasks. Our approach to the ranking problems, both for Answer Ranking and Question Ranking, is different from previous approaches in the sense that we judge the relevance between inputs based on their context information. 
We showed that adding sentential or document context information about questions (or answers) leads to better rankings. 7 Conclusion and Future Work We introduce an effective approach to integrate sentential or document context into deep autoencoders and show that such integration is important in semantic similarity tasks. In the future, we aim to investigate other types of linguistic context (such as POS tag and word dependency information, word sense, and discourse relations) and develop a unified representation learning framework that integrates such linguistic context with representation learning models. Acknowledgments We thank anonymous reviewers for their thoughtful comments. This paper is based upon work supported, in whole or in part, with funding from the United States Government. Boyd-Graber is supported by NSF grants IIS/1320538, IIS/1409287, and NCSE/1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. 1890 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March. Yoshua Bengio, Aaron Courville, and Pierre Vincent. 2013. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8). David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Jordan L Boyd-Graber and David M Blei. 2009. Syntactic topic models. In Proceedings of NIPS. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP. St´ephane Clinchant and Florent Perronnin. 2013. Aggregating continuous word embeddings for information retrieval. the Workshop on Continuous Vector Space Models and their Compositionality, ACL. Cicero Dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. 2015. Learning hybrid representations to retrieve semantically equivalent questions. In Proceedings of ACL-IJCNLP. Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Proceedings of NAACL. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of ACL-IJCNLP. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2016. Document context language models. ICLR (Workshop track). S. Kokoska and D. Zwillinger. 2000. CRC Standard Probability and Statistics Tables and Formulae, Student Edition. Taylor & Francis. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML. Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of EMNLP. Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of EMNLP. Chuan-bi Lin. 2007. 
Projected gradient methods for nonnegative matrix factorization. Neural computation, 19(10):2756–2779. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Proceedings of AAAI. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In Spoken Language Technologies. IEEE. Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of INTERSPEECH. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the EMNLP. Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving topic models with latent feature word representations. TACL, 3:299– 313. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of ACLIJNLP. Aliaksei Severyn and Alessandro Moschitti. 2013. Automatic feature engineering for answer selection and extraction. In Proceedings of EMNLP. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of SIGIR. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS. 1891 Keith Stevens, Philip Kegelmeyer, David Andrzejewski, and David Buttler. 2012. Exploring topic coherence over many models and many topics. In Proceedings of EMNLP-CNNL. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings ICML. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408. Tian Wang and Kyunghyun Cho. 2015. Larger-context language modelling. CoRR, abs/1511.03729. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? a quasisynchronous grammar for QA. In Proceedings of EMNLP-CoNLL. Xuchen Yao, Benjamin Van Durme, Chris Callisonburch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In Proceedings of NAACL. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of ACL. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. In NIPS, Deep Learning Workshop. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. 1892
2016
177
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1893–1902, Berlin, Germany, August 7-12, 2016. © 2016 Association for Computational Linguistics

Linguistic Benchmarks of Online News Article Quality

Ioannis Arapakis∗ (Eurecat, Barcelona, Spain) [email protected]
Filipa Peleja∗ (Vodafone, Lisbon, Portugal) [email protected]
B. Barla Cambazoglu∗ (Independent Researcher) [email protected]
Joao Magalhaes (NOVA-LINCS, DI, FCT, Universidade NOVA Lisboa, Portugal) [email protected]

(∗ This work was done while the authors were at Yahoo! Research Barcelona.)

Abstract

Online news editors ask themselves the same question many times: what is missing in this news article to go online? This is not an easy question to be answered by computational linguistic methods. In this work, we address this important question and characterise the constituents of news article editorial quality. More specifically, we identify 14 aspects related to the content of news articles. Through a correlation analysis, we quantify their independence and relation to assessing an article's editorial quality. We also demonstrate that the identified aspects, when combined together, can be used effectively in quality control methods for online news.

1 Introduction

A recent study (footnote 1: http://www.people-press.org/2013/08/08/amidcriticism-support-for-medias-watchdog-role-stands-out) found that online news is nowadays the main source of news for the population in the 18-29 age group (71%), and as popular as TV in the 30-39 age group (63%). The readers' appetite for high-quality online news results in thousands of articles being published every day across the Web. For instance, it is not uncommon to find the same facts reported by many different online news articles. However, only a few of them actually grab the attention of the readers. Journalists and editors follow standardised discourse rules and techniques aiming at engaging the reader in the article's narrative (Louis and Nenkova, 2013). Analysing the discourse of such articles is central to properly assessing the quality of online news (van Dijk and Kintsch, 1983). Defining the variables that computational linguistics should quantify is a challenging task. Several questions arise from this exercise. For example, what does quality refer to? What makes a news article perceived as high quality by the editors/users? What aspects of an article correlate better with its perceived quality? Can we predict the quality of an article using linguistic features extracted from its content? These are the kinds of questions we address in this paper.

To this end, we propose a linguistic resource and assessment methodology to quantify the editorial quality of online news discourse. We argue that quality is too complex to be represented by a single number and should instead be decomposed into a set of simpler variables that capture the different linguistic and narrative aspects of online news. Thus, we depart from the current literature and propose a multidimensional representation of quality. The first contribution of this paper is a taxonomy of 14 different content aspects that are associated with the editor-perceived quality of online news articles. The proposed 14 aspects are the result of an editorial study involving professional editors, journalists, and computational linguists. The second contribution of this paper is an expert-annotated corpus of online news articles obtained from a major news portal.
This corpus is curated by the editors and journalists who annotated the articles with respect to the 14 aspects and to the general editorial quality. To confirm the independence and relevance of the proposed aspects, we perform a correlation analysis on this ground-truth to determine the strength of the associations between different aspects and article editorial quality. Our analysis shows that the editorperceived quality of an article exhibits a strong positive correlation with certain aspects, such as 1893 fluency and completeness, while it is weakly correlated with other aspects like subjectivity and polarity. As a baseline benchmark, we investigate the feasibility of predicting the quality aspects of an article using features extracted from the article only. Our findings indicate that article editorial quality prediction is a challenging task and that article quality can be predicted to a varying degree, depending on the feature space. The proposed aspects can be used to control the editorial quality with a Root Mean Squared Error (RMSE) of 0.398 on a 5-point Likert-scale. The rest of the paper is organised as follows. Next, we discuss existing literature in discourse analysis and text quality metrics. In Section 3, we present the aspects that we identified as potential indicators of article quality. Section 4 provides the details of our online news corpus targeting the aspects of editorial quality control. The results of the correlation analysis conducted between the identified aspects and article quality are presented in Section 4. In Section 5, we present a baseline benchmark to automatically infer individual aspects and editorial quality from online news. 2 Related Work A very recent work related to ours is (Gao et al., 2014), where the authors try to predict the interestingness of a news article for a user who is currently reading another news article. In our work, however, we try to predict the perceived quality of an article without using any context information other than the content of the article itself. Moreover, while the authors of (Gao et al., 2014) take a quite pragmatic approach to handle the problem, we follow a more principled approach and model the quality of a news article according to five orthogonal dimensions: readability, informativeness, style, topic, and sentiment. Work has been done in each one of these dimensions, but none has tackled the problem of modelling overall article quality in a comprehensive and articulated manner as we do. Below, we provide a survey of the previous work on these dimensions. The readability of a piece of text can be defined as the ease that the text can be processed and understood by a human reader (Richards and Schmidt, 2013; Zamanian and Heydari, 2012). The readability is usually associated with fluency and writing quality (Nenkova et al., 2010; Pitler and Nenkova, 2008). Even though there is a significant amount of research that targets readability, most work (Redish, 2000; Yan et al., 2006) were originally designed to measure the readability of school books and do not suit well to more complex reading materials, such as news articles, which form the focus of our work. The informativeness of a news article has been tackled from several different angles. In (Tang et al., 2003), news information quality was characterised by a set of nine aspects that were shown to have a good correlation with textual features. 
Catchy titles were shown to often lead to frustration, as the reader does not get the content that she expects (Louis and Nenkova, 2011). The task of assessing a news title’s descriptiveness is related to semantic text similarity and has been researched by the SemEval initiative (Agirre et al., 2013). Moreover, the completeness of a news article is an aspect that has been considered in the past by (Louis and Nenkova, 2014), which showed that reporting the news with adequate detail is key to provide the reader with enough information to grasp the entire story. The freshness of news information also sets the tone of the discourse: information can be novel to the average reader or it can be already known and be presented as a reference to the reader. The novelty of an article is essentially accomplished by either analysing previous articles (Gamon, 2006) or by relying on realtime data from social-media services (Phelan et al., 2009). The characterisation of the style of text compositions has been an active topic of research in communication sciences and humanities. An excellent example of the research done in this area is the influential work in (McNamara et al., 2009), where the authors found the best predictors of writing quality to be the syntactic complexity (number of words before the main verb), the diversity of words used by the author, and some other shallow features. In NLP, the writing style has been investigated in several contexts. A problem relevant to the one we addressed is the characterisation of an author’s writing style to predict the success of novels (Ashok et al., 2013). The authors investigated a wide range of complex linguistic features, ranging from simple unigrams to distribution of word categories, grammar rules, distribution of constituents, sentiment, and connotation. The comparison of novels and news articles revealed a great similar1894 ity in the writing style of novels and informative articles. The broadness of a news topic has an impact on the reader’s perceived quality of the article. A technical article is usually targeting niche groups of users and a popular article targets the masses. One of the few corpus (Louis and Nenkova, 2013) addressing quality was limited to the domain of scientific journalism, thus more technical articles. This corpus only considered news from the New York Times, thus contained already very good quality news. Two recent work investigated the feasibility of predicting news articles’ feature popularity in social media at cold start (Bandari et al., 2012; Arapakis et al., 2014a). In (Bandari et al., 2012), features extracted from the article’s content as well as additional meta-data was used to predict the number of times an article will be shared in Twitter after it went online. In (Arapakis et al., 2014a), a similar study was repeated to predict the popularity of a news article in social media using additional features obtained from external sources. Sentiment analysis concerns the subjectivity and the strength and sign of the opinions expressed in a given piece of text. In (Arapakis et al., 2014b), it was demonstrated that news articles exhibit considerable variation in terms of the sentimentality and polarity of their content. The work in (Phelan et al., 2009) has provided evidence that sentimentrelated aspects are important to profile and assess the quality of news articles. Sentiment analysis has been applied to news articles in other contexts as well (Godbole et al., 2007; Balahur et al., 2010). 
3 Modeling News Article Quality The editorial control of news articles is an unsolved task that involves addressing a number of issues, such as identifying the characteristics of an effective text, determining what methods produce reliable and valid judgments for text quality, as well as selecting appropriate aspects of text evaluation that can be automated using machine learning methods. Underlying these tasks is a main theme: can we identify benchmarks for characterising news article quality? Therefore, there is a need for empirical work to identify the global and local textual features which will help us make an optimal evaluation of news articles. By doing so, we achieve two goals. On one hand, we can offer valuable insights with respect to what constitutes an engaging, good quality news article. On the other hand, we can identify benchmarks for characterising news article quality in an automatic and scalable way and, thus, predict poor writing before a news article is even published. This can help reduce greatly the burden of manual evaluation which is currently performed by professional editors. 3.1 Methodology The methodology described here provides a framework for characterising and modelling news article editorial quality. In our work, we follow a bottom-up approach and identify 14 different content aspects that are good predictors (as we demonstrate in Section 6.1) of news article quality. The aspects we identified are informed by input from news editors, journalists and computational linguists, and previous research in NLP and, particularly, the efforts in text summarisation (Bouayad-Agha et al., 2012), document understanding (Dang, 2005; Seki et al., 2006) and question answering (Surdeanu et al., 2008; Shtok et al., 2012). After discussing the editorial quality control with professionals, we gathered a set of heuristics and examined the literature for ways of designing quantitative measures to achieve our goal. We group the aspects under five headings: readability, informativeness, style, topic, and sentiment (see Fig. 1). Below, we provide a brief description of each aspect. 3.2 Readability High quality articles are written in a way that makes them easier to read. In our model, we include two different aspects related to readability (Pitler and Nenkova, 2008): fluency and conciseness. Fluency: Fluent articles are built from sentence to sentence, forming a coherent body of information. Consecutive sentences are meaningfully connected. Similarly, paragraphs are written in a logical sequence. Conciseness: Concise articles have a focus. Sentences contain information that is related to the main theme of the article. The same or similar information is tried to be not repeated. 3.3 Informativeness As a main reason for reading online news is to remain well-informed (Tang et al., 2003), informativeness of articles have an effect on their per1895 Quality Readability Style Topic Sentiment Informativeness Fluency Conciseness Descriptiveness Novelty Completeness Referencing Formality Richness Attractiveness Technicality Popularity Subjectivity Sentimentality Polarity Figure 1: A taxonomy of the identified aspects. ceived quality. In our model, we consider four different aspects related to informativeness: descriptiveness, novelty, completeness, and referencing. Descriptiveness: Descriptiveness indicates how well the title of an article reflects its main body content. Titles with low descriptiveness are often click baits (e.g., “You won’t believe what you will see”). 
Such titles may lead to dissatisfaction, as the provided news content usually does not meet the raised user expectation. Novelty: Novel articles provide new and valuable information to the readers. The provided information is unlikely to be known to an average reader. Completeness: Complete articles cover the topic in an adequate level of detail (Louis and Nenkova, 2014; Bouayad-Agha et al., 2012). A reader can satisfy her information need after reading such an article. Referencing: Referencing is about the degree to which the article references external sources (including other people’s opinions and related articles). Providing references allows the reader to access related information sources easily, (Gamon, 2006; van Dijk and Kintsch, 1983). 3.4 Style The language and aesthetics is also related to the article quality (McNamara et al., 2009; Ashok et al., 2013; Pavlick and Tetreault, 2016; Peterson et al., 2011). We consider three style-related aspects: formality, richness, and attractiveness. Formality: Formal articles are written by following certain writing guidelines. They are more likely to contain formal words and obey punctuation/grammar rules(Peterson et al., 2011). Richness: The vocabulary of rich articles is perceived as diverse and interesting by the readers. Rich articles are not written in a plain and straightforward manner. Attractiveness: Attractiveness measures the degree to which the title of an article raises curiosity in its readers. Attractive titles entice people to continue reading the main content of the article. 3.5 Topic Editors consider the nature of the article with respect to its target audience, i.e., according to the target audience (technical or popular) the other aspects may play a different role. We investigate two topic-related aspects: technicality and popularity. Technicality: Technical articles (Louis and Nenkova, 2013) usually require some effort to understand as well as previous knowledge on the topic. Examples of usually technical news topics include science and finance. Popularity: The popularity refers to the size of the audience who would be interested in the topic of the article (Bandari et al., 2012; Arapakis et al., 2014b). For example, while many readers are interested in reading about celebrities, few readers are interested in articles about anthropology. 3.6 Sentiment Finally, we consider the sentiments expressed in an article. Besides opinion articles (which are subjective by nature), many news may also convey a particular emotion. We evaluate three sentimentrelated aspects: subjectivity, sentimentality, and polarity. Subjectivity: Subjective articles tend to contain opinions, preferences, or possibilities. There are relatively few factual statements. Sentimentality: Sentimentality is a measure of the total magnitude of positive or negative statements made in the article regarding an object or an event. Highly sentimental articles include relatively few neutral statements. Polarity: Polarity indicates the overall sign of the sentiments expressed in the article (Arapakis et al., 2014a). Articles with positive (negative) polarity include relatively more statements with positive (negative) sentiment. 4 Corpus: Editorial Quality Control Our goal is to identify proxies of news article quality that can be learned and predicted in an automatic and scalable manner. 
To identify these proxies, we rely on the domain knowledge and human intuition of expert judges, whom we employ in a rigorous, crowdsourcing-based evaluation for gen1896 erating a ground-truth dataset. Through an editorial study we create an in-domain, annotated news corpus that allows us to learn predictive models which can estimate accurately the perceived quality of news articles. 4.1 Online News Articles Our analysis was conducted on a dataset consisting of 13, 319 news articles taken from a major news portal2. We opted for a single news portal to be able to extract features that are consistent across all news articles. The dataset was constructed by crawling news articles over a period of two weeks. During the crawling period, we connected to the RSS news feed of the portal every 15 minutes and fetched newly published articles written in English. The content of the discovered articles was then downloaded from the portal. Each article is identified by its unique URI and stored in a database, along with some meta-data, such as article’s genre, its publication date, and its HTML content. We applied further filtering on the initial set of 13,319 news articles. The word count distribution of the articles followed a bimodal pattern, with the bulk of the articles located around a mean value of 447.5. Using this value as a reference point, we removed articles that contain less than 150 or more than 800 words. We then sampled a smaller set of articles such that each of the most frequent 15 genres have at least 65 articles in the sample. This left us with 1,043 new articles, out of which a randomly selected set of 561 articles were used in the editorial study. The selected news articles were preprocessed before the editorial study. The preprocessing was performed in two steps. First, we removed the boilerplate of HTML pages and extracted the main body text of news articles, using Boilerpipe (Kohlsch¨utter et al., 2010). Second, we segmented the body text into sentences and paragraphs. For sentence segmentation, we used the Stanford CoreNLP library, which includes a probabilistic parser (Klein and Manning, 2003; Mihalcea and Csomai, 2007). For each news article we generated a body- and sentence- level annotation form (see example in the supplementary notes). 4.2 Annotations of Editorial Quality Aspects For our editorial study, we employed ten expert judges (male = 4, female = 6) who had a back2Yahoo! News at http://www.yahoo.com/news. 62.1% 31.1% 3.2% 3.4% 0% 20% 40% 60% 80% 100% Fluency Conciseness Descriptiveness Novelty Completeness Referencing Formality Richness Attractiveness Technicality Popularity Subjectivity Positivity Negativity Total Full agreement 1-level disag. 2-level disag. 3-level disag. Figure 2: Annotators agreement. ground in computational linguistics, journalism, or were media monitoring experts. The expert judges were either native English speakers or were proficient with the English language. The expert judges assessed a total of 561 news articles on 15 measures (14 aspects and the main quality measure), using a 5-point Likert scale, where low and high scores suggest weak or strong presence of the assessed measure, respectively. The annotation took place remotely, and each expert judge could annotate up to ten news articles per day (this threshold was set to ensure a high quality of annotation), and each article was annotated by one expert judge and by one of the authors of this paper. 
Prior to that, there was a pilot session were each expert judge was asked to become familiar with the quality criteria and annotate three trial news articles. Next, a meeting (physical or online) was arranged and the authors discussed with the expert judge the rationale behind assigning the scores, and appropriate corrections and recommendations were made. This step ensured that we had disambiguated any questions prior to the editorial study and also assured that expert judges followed the same scoring procedure. The compensation for annotating was 10eper article. The annotated corpus is publicly available.3 Fig. 2 illustrates the details of the overall annotations agreement. We can see that annotations agree on 62.1% of the articles, on 65.5% they vary 3http://novasearch.org/datasets/. 1897 only 1-point and in 96.6% they vary 2 points in the 5-point Likert-scale. These results are quite satisfying and show a good level of agreement and consistency across all aspects. 4.3 Corpus Statistics Table 1 shows the mean (M) and standard deviation (SD) values for five different distributions (number of characters, words, unique words, entities, and sentences) and four different subsets of the corpus. The subsets contain all articles, highquality articles (labels 4 and 5), medium-quality articles (label 3), or low-quality articles (labels 1 and 2). The last three subsets contain 84, 298, and 179 news articles, respectively. According to these numbers, the article quality follows an unbalanced distribution: about half of the articles are labeled as medium quality, and there are about two times more low-quality articles than high-quality articles. According to Table 1, there is a clear difference between distributions for the high- and low- quality articles. In general, we observe that higher-quality articles are relatively longer (e.g., more words or sentences), on average. 5 Aspects Correlation Analysis To identify which aspects of a news article are better discriminants of its quality, we perform a correlation analysis. Given that we are looking at ordinal data that violates parametric assumptions, we compute the Spearman’s rank correlation coefficients (rs) between the aspects’ scores and the news article quality that we acquired from our ground truth. The motivation behind this analysis is to get a first intuition into the aspects’ effectiveness to act as quality predictors, by understanding how they are associated to news article quality. In Table 2, we report several statistically significant correlations between the different aspects. Given that our correlation analysis involves multiple pairwise comparisons, we need to correct the level of significance for each test such that the overall Type I error rate (α) across all comparisons remains at .05. Given that the Bonferroni correction is too conservative in the Type I error rate, we opt for the more liberal criterion proposed by Benjamini and Hochberg (Benjamini and Hochberg, 1995; Benjamini and Hochberg, 2000) and compute the critical p-value for every pairwise comparisons as pcrit = j k α, (1) where j is the index of all pairwise comparison pvalues, listed in an ascending order, and k is the number of comparisons. If we consider Cohen’s conventions for the interpretation of effect size, we observe that most of the correlation coefficients shown in Table 2 represent sizeable effects, which range from small (±.1) to large (±.5). 
For example, completeness is highly correlated with quality (rs = .70) while polarity is the least correlated with quality (rs = .05). In addition, Table 2 does not provide any evidence of multicollinearity since none of the aspects (with the exception of quality) are significantly highly correlated (rs > .80). 6 Predicting Editorial Quality 6.1 Predicting EQ with the Aspects In this section, we demonstrate the predictive characteristics of the proposed aspects (Section 3) with respect to news article quality. We formulate the prediction problem as a regression problem, and conduct a 10-fold cross validation to estimate the regression model. For our regression task we use a Generalised Linear Model (GLM) via penalized maximum likelihood (Friedman et al., 2010). The regularisation path is computed for the lasso or elasticnet penalty at a grid of values for the regularisation parameter lambda. The GLM solves the following problem min β0,β 1 N N X i=1 wil(yi, β0+βT xi)+λ[(1−α)∥β∥2 2 2 +α∥β∥1, ], (2) over a grid of values of λ covering the entire range. Here l(y, η) is the negative log-likelihood contribution for observation i. The elastic-net penalty is controlled by α, and bridges the gap between lasso (α = 1, the default) and ridge (α = 0). The tuning parameter λ controls the overall strength of the penalty. It is known that the ridge penalty shrinks the coefficients of correlated predictors towards each other while the lasso tends to pick one of them and discard the others, which makes it more robust against predictor collinearity and overfitting. We used the values that minimise RMSE, i.e., α = 0.95 and λ = 0.01. In Table 3, we see the coefficients of the final GLM model which are to be interpreted in the same manner as a Cox model. A positive regression coefficient for an explanatory variable means that the variable is associated with a higher risk of an event. In our case, all coefficients are positive, being completeness, fluency and richness the ones 1898 Table 1: Statistics for the annotated news corpus (M ± SD values) All High quality Medium quality Low quality Characters 2490.28 ± 1900.95 4321.92 ± 2258.51 2698.94 ± 1641.85 1290.07 ± 1166.65 Words 413.03 ± 318.63 717.87 ± 386.08 447.46 ± 274.88 213.76 ± 193.12 Unique words 167.29 ± 110.25 269.08 ± 122.14 180.27 ± 95.72 98.29 ± 77.09 Entities 18.45 ± 14.43 23.85 ± 15.09 19.43 ± 11.03 14.29 ± 17.59 Sentences 20.67 ± 17.89 35.27 ± 24.92 21.76 ± 14.02 12.03 ± 14.42 Table 2: Correlations between different aspects in the ground-truth data Conc. Desc. Nov. Comp. Ref. Form. Rich. Attr. Tech. Pop. Subj. Sent. Pol. Qual. Fluency .61∗∗ .38∗∗ .34∗∗ .57∗∗ .37∗∗ .40∗∗ .53∗∗ .41∗∗ .15∗∗ .27∗∗ .12∗∗ .11∗∗ .03 .66∗∗ Conciseness .33∗∗ .32∗∗ .38∗∗ .28∗∗ .41∗∗ .39∗∗ .30∗∗ .24∗∗ .25∗∗ −.00 .08 .01 .47∗∗ Descriptiveness .18∗∗ .32∗∗ .23∗∗ .23∗∗ .19∗∗ .13∗∗ .13∗∗ .17∗∗ .00∗ .09 .00 .37∗∗ Novelty .39∗∗ .40∗∗ .44∗∗ .35∗∗ .37∗∗ .16∗∗ .32∗∗ .05 .25∗∗ −.05 .41∗∗ Completeness .48∗∗ .38∗∗ .51∗∗ .39∗∗ .30∗∗ .26∗∗ .18∗∗ .20∗∗ .03 .70∗∗ References .51∗∗ .35∗∗ .33∗∗ .30∗∗ .29∗∗ .27∗∗ .44∗∗ .00 .52∗∗ Formality .43∗∗ .30∗∗ .46∗∗ .25∗∗ .01 .31∗∗ −.08 .47∗∗ Richness .50∗∗ .25∗∗ .35∗∗ .24∗∗ .15∗∗ .04 .63∗∗ Attractiveness .15∗∗ .55∗∗ .28∗∗ .23∗∗ −.02 .52∗∗ Technicality .22∗∗ .16∗∗ .22∗∗ .11∗ .30∗∗ Popularity .28∗∗ .24∗∗ .02 .41∗∗ Subjectiveness .42∗∗ .11∗ .23∗∗ Sentimemtality −.13∗ .27∗∗ Polarity .05 Significance levels (two-tailed) are as follows: ∗:< .01; ∗∗:< .001. Table 3: The coefficients of the final GLM model. The intercept value is 2.9103. 
Group Aspects Coefficients Readability Fluency .1730 Conciseness .0372 Informativeness Completeness .2062 Descriptiveness .0723 Referencing .0343 Novelty Style Richness .1192 Formality .0602 Attractiveness .0515 Topic Popularity .0578 Technicality .0047 Sentiment Subjectivity Polarity Sentimentality showing a higher relation to the overall editorial quality. Next, we replicate our regression experiments for the GLM regression model, but this time we apply a leave-one-aspect-out method, to examine the relative importance of each aspect in explaining our predicted variable, i.e., the news article quality. To this end, we evaluate the 14 regression models, each one with out one of the aspects. The goal is to verify how prediction is affected by each individual quality aspect. Table 4: Average performance across all ten folds for the GL model and for different feature sets. Group Aspects RMSE RRSE All groups All aspects .3984 Readability w/o Fluency .4158 -4.36% w/o Conciseness .3984 .00% Informative. w/o Completeness .4233 -6.25% w/o Referencing .4000 -.40% w/o Descriptiveness .3999 -.37% w/o Novelty .3981 -.07% Style w/o Richness .4081 -2.43% w/o Attractiveness .4009 -.62% w/o Formality .3990 -.15% Topic w/o Popularity .4003 -.47% w/o Technicality .3976 .20% Sentiment w/o Subjectivity .3974 .25% w/o Polarity .3984 -.10% w/o Sentimentality .3983 .02% To compare the performance of our GLM regression model against the baseline method (with all quality aspects), we compute the Root Mean Squared Error (RMSE), given by RMSE = sPN i=1 (ˆy −yi)2 N (3) where ˆy is the sample mean and yi is the i-th estimate. However, while regression results give an idea of the prediction quality of the models they do 1899 not quantify the size of the difference of their performance. We, therefore, also compute the Root Relative Squared Error (RRSE) metric as it provides a good indication of any relative improvement over the baseline methods, given by RRSE = 1 − RMSEGLM RMSEBaseline . (4) Table 4 shows the RMSE and RRSE, with respect to the GLM regression model trained on all the features. These results show that completeness, fluency and richness are the aspects that most affect RMSE when they are missing from the full model. 6.2 Automatic Prediction of EQ We examined a baseline model (BaselineM) that always predicts the mean value and a baseline GLM model (BaselineShallow) trained on shallow features, to automatically predict the editorial quality. Shallow or lexical features are commonly used in traditional readability metrics, which are based on the analysis of superficial text properties. Flesh-Kincaid Grade Level (Flesch, 1979; Franc¸ois and Fairon, 2012), SMOG (McLaughlin, 1969), and Gunning Fog (Gunning, 1952) are some examples of readability metrics. The simplicity of these features makes them an attractive solution compared to computationally more expensive features, such as syntactic (Feng et al., 2010). However, as Shriver (Schriver, 1989) points out, the readability metrics can be useful when used as gross index of readability. For our baseline, we consider the Flesh Kincaid, Coleman Liau, ARI, RIX, Gunning Fog, SMOG, LIX features. In Table 5, we report the average performance of the GLM regression model, BaselineM, and BaselineShallow across all folds. We note that our GLM regression model improves the RMSE by at least 40%, compared to both baselines. Finally, as a reference for future research with the proposed corpus, we trained GLM regression models to predict each aspect individually. 
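Before turning to the per-aspect results, a minimal sketch of the regression and ablation protocol described in this section (Eqs. 2-4); it uses scikit-learn's ElasticNet as a stand-in for the glmnet-style model of Friedman et al. (2010), and the feature-matrix layout, function names, and 10-fold setup are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold

def cv_rmse(X, y, n_splits=10):
    """10-fold CV RMSE of an elastic-net regression (Eqs. 2 and 3).
    Note the naming mismatch: scikit-learn's `alpha` is the penalty strength
    (lambda = 0.01 above) and `l1_ratio` is the mixing weight (alpha = 0.95)."""
    errs = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = ElasticNet(alpha=0.01, l1_ratio=0.95).fit(X[train], y[train])
        pred = model.predict(X[test])
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))

def leave_one_aspect_out(X, y, aspect_names):
    """RMSE of each reduced model plus its RRSE relative to the full model (Eq. 4),
    reproducing the layout of Table 4. X has one column per aspect score."""
    full = cv_rmse(X, y)
    rows = {}
    for i, name in enumerate(aspect_names):
        reduced = cv_rmse(np.delete(X, i, axis=1), y)
        rows[name] = (reduced, 1.0 - reduced / full)  # (RMSE, RRSE)
    return full, rows
```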
Table 6 presents the RMSE for each aspect, for two different sets of feature: a standard BoW and the shallow features described previously, as well as the BaselineM. Despite the simplicity of the features, we can see that the aspects can be inferred from the articles. In particular, the model trained on the BoW features achieves an RMSE that is very close to that of the BaselineM, whereas the Table 5: Average performance across all ten folds for the GLM, BaselineM and BaselineShallow. Method RMSE RRSE BaselineM 0.7048 43.47% BaselineShallow 0.8937 55.41% GLM 0.3984 Table 6: Average performance across all ten folds for the GL model and for different feature sets. Aspects BoW Shallow BaselineM Fluency 1.1571 1.1181 1.1462 Conciseness 1.2622 1.1968 1.2456 Completeness .8408 .7945 .8130 Referencing .7047 .6613 .7048 Descriptiveness .9260 .8730 .9073 Novelty .7994 .7607 .7797 Richness .9866 .9454 .9568 Attractiveness .7048 .6702 .6907 Formality .7025 .6691 .6920 Popularity .8329 .7825 .8250 Technicality .7923 .7409 .7907 Subjectivity .8750 .8283 .9094 Polarity .8109 .7780 .8009 Sentimentality .8170 .7668 .8046 model trained on the shallow features outperforms all other models. 7 Conclusions In this paper, we proposed an annotated corpus for controlling the editorial quality of online news through 14 aspects related to editors perceived quality of news articles. To this end, we performed an editorial study with expert judges either in computational linguistics, journalism, or media monitoring experts. The judges assessed a total of 561 news articles with respect to 14 aspects. The study produced valuable insights. One important finding was that high quality articles share a significant amount of variability with several of the proposed aspects, which supports the claim that the proposed aspects may characterise news article quality in an automatic and scalable way. Another finding was that fluency, completeness and richness are the aspects that best correlate with quality, while technicality, subjectivity and polarity aspects show a poor correlation with quality. This shows that the text comprehension and writing style are aspects that are more relevant than sentiment. Later, we showed that using the entire 1900 set of 14 aspects we could predict the text quality with an RMSE of only 0.400 in a 5-point Likertscale. This renders a very effective decomposition of news article quality into the 14 aspects. As future work, we plan to investigate other linguistic representations that can improve the automated extraction of the proposed aspects to better predict the article’s perceived quality. Acknowledgments The first three authors were supported by MULTISENSOR project, partially funded by the European Commission, under the contract number FP7-610411. The fourth author was partially supported by FCT/MCTES through the project NOVA LINCS Ref. UID/CEC/04516/2013. References Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalezagirre, and Weiwei Guo. 2013. sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics. Ioannis Arapakis, B.Barla Cambazoglu, and Mounia Lalmas. 2014a. On the feasibility of predicting news popularity at cold start. In LucaMaria Aiello and Daniel McFarland, editors, Social Informatics, volume 8851 of Lecture Notes in Computer Science, pages 290–299. Springer International Publishing. 
Ioannis Arapakis, Mounia Lalmas, Berkant Barla Cambazoglu, Mari-Carmen Marcos, and Joemon M. Jose. 2014b. User engagement in online news: Under the scope of sentiment, interest, affect, and gaze. JASIST, 65(10):1988–2005. Vikas Ganjigunte Ashok, Song Feng, and Yejin Choi. 2013. Success with style: Using writing style to predict the success of novels. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Alexandra Balahur, Ralf Steinberger, Mijail Kabadjov, Vanni Zavarella, Erik van der Goot, Matina Halkia, Bruno Pouliquen, and Jenya Belyaeva. 2010. Sentiment analysis in the news. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation, Valletta, Malta, may. European Language Resources Association (ELRA). Roja Bandari, Sitaram Asur, and Bernardo A Huberman. 2012. The pulse of news in social media: Forecasting popularity. In ICWSM, pages 26–33. Y. Benjamini and Y. Hochberg. 1995. Controlling the false discovery rate - a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1):289–300. Y. Benjamini and Y. Hochberg. 2000. On the adaptive control of the false discovery fate in multiple testing with independent statistics. Journal of Educational and Behavioral Statistics, 25(1):60–83. Nadjet Bouayad-Agha, Gerard Casamayor, Simon Mille, and Leo Wanner. 2012. Perspective-oriented generation of football match summaries: Old tasks, new challenges. ACM Transactions on Speech and Language Processing (TSLP), 9(2):3. Hoa Trang Dang. 2005. Overview of duc 2005. In Proceedings of the document understanding conference, volume 2005, pages 1–12. Lijun Feng, Martin Jansche, Matt Huenerfauth, and No´emie Elhadad. 2010. A Comparison of Features for Automatic Readability Assessment. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters (COLING), COLING ’10, pages 276–284, Stroudsburg, PA, USA. Association for Computational Linguistics. Rudolf Franz Flesch. 1979. How to write plain English: A book for lawyers and consumers. Harpercollins. Thomas Franc¸ois and C´edrick Fairon. 2012. An AI readability formula for French as a foreign language. In Proceedings of the Conference on Empirical Methods in Natural Language (EMNLP), pages 466–477. Association for Computational Linguistics. Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2010. Regularization paths for generalized linear models via coordinate descent. Journal of statistical software, 33(1):1. Michael Gamon. 2006. Graph-based text representation for novelty detection. In Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing, pages 17–24. Association for Computational Linguistics. Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, Li Deng, and Yelong Shen. 2014. Modeling interestingness with deep neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Namrata Godbole, Manjunath Srinivasaiah, and Steven Skiena. 2007. Large-scale sentiment analysis for news and blogs. In Proceedings of the International Conference on Weblogs and Social Media. Robert Gunning. 1952. The Technique of Clear Writing. McGraw-Hill. 1901 Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. 
In Advances in Neural Information Processing Systems, volume 15. MIT Press. Christian Kohlsch¨utter, Peter Fankhauser, and Wolfgang Nejdl. 2010. Boilerplate detection using shallow text features. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM ’10, pages 441–450, New York, NY, USA. ACM. Annie Louis and Ani Nenkova. 2011. Text specificity and impact on quality of news summaries. In Proceedings of the Workshop on Monolingual Text-ToText Generation, pages 34–42. Association for Computational Linguistics. Annie Louis and Ani Nenkova. 2013. A corpus of science journalism for analyzing writing quality. Dialogue & Discourse, 4(2):87–117. Annie Louis and Ani Nenkova. 2014. Verbose, laconic or just right: A simple computational model of content appropriateness under length constraints. pages 636–644. G Harry McLaughlin. 1969. SMOG grading: A new readability formula. Journal of reading, JSTOR, 12(8):639–646. Danielle S McNamara, Scott A Crossley, and Philip M McCarthy. 2009. Linguistic features of writing quality. Written Communication. Rada Mihalcea and Andras Csomai. 2007. Wikify!: Linking documents to encyclopedic knowledge. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management, CIKM ’07, pages 233–242, New York, NY, USA. ACM. Ani Nenkova, Jieun Chae, Annie Louis, and Emily Pitler. 2010. Structural Features for Predicting the Linguistic Quality of Text. Proceddings of the Empirical Methods in Natural Language Generation (EMNLP), 5790:222–241. Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61–74. Kelly Peterson, Matt Hohensee, and Fei Xia. 2011. Email formality in the workplace: A case study on the enron corpus. In Proceedings of the Workshop on Languages in Social Media, pages 86–95. Association for Computational Linguistics. Owen Phelan, Kevin McCarthy, and Barry Smyth. 2009. Using twitter to recommend real-time topical news. In Proceedings of the third ACM conference on Recommender systems, pages 385–388. ACM. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08. Janice Redish. 2000. Readability formulas have even more limitations than Klare discusses. ACM Journal of Computer Documentation (JCD), 24(3):132–137. Jack C Richards and Richard W Schmidt. 2013. Longman dictionary of language teaching and applied linguistics, volume 78. Routledge. Karen A Schriver. 1989. Evaluating text quality: The continuum from text-focused to readerfocused methods. Professional Communication, IEEE Transactions on, 32(4):238–255. Yohei Seki, Koji Eguchi, Noriko Kando, and Masaki Aono. 2006. Opinion-focused summarization and its analysis at duc 2006. In Proceedings of the Document Understanding Conference (DUC), pages 122– 130. Anna Shtok, Gideon Dror, Yoelle Maarek, and Idan Szpektor. 2012. Learning from the past: answering new questions with past answers. In Proceedings of the 21st international conference on World Wide Web, pages 759–768. ACM. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2008. Learning to rank answers on large online qa collections. In ACL, volume 8, pages 719– 727. Rong Tang, Kwong Bor Ng, Tomek Strzalkowski, and Paul B Kantor. 2003. Automatically predicting information quality in news documents. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: companion volume of the Proceedings of HLT-NAACL 2003–short papers-Volume 2, pages 97–99. Association for Computational Linguistics. Teun A. van Dijk and W. Kintsch. 1983. Strategies of discourse comprehension. New York: Academic Press. Xin Yan, Dawei Song, and Xue Li. 2006. Conceptbased document readability in domain specific information retrieval. In Proceedings of the 15th ACM Conference on Information and Knowledge Management (CIKM), pages 540–549. ACM. Mostafa Zamanian and Pooneh Heydari. 2012. Readability of texts: State of the art. Theory and Practice in Language Studies, 2(1):43–53. 1902
2016
178
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1903–1912, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation Ander Barrena and Aitor Soroa and Eneko Agirre IXA NLP Group UPV/EHU University of the Basque Country Donostia, Basque Country ander.barrena,a.soroa,[email protected] Abstract Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative N¨aive Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information. 1 Introduction The goal of Named Entity Disambiguation (NED) is to link each mention of named entities in a document to a knowledge-base of instances. The task is also known as Entity Linking or Entity Resolution (Bunescu and Pasca, 2006; McNamee and Dang, 2009; Hachey et al., 2012). NED is confounded by the ambiguity of named entity mentions. For instance, according to Wikipedia, Liechtenstein can refer to the micro-state, several towns, two castles or a national football team, among other instances. Another ambiguous entity is Derbyshire which can refer to a county in England or a cricket team. Most NED research use knowledge-bases derived or closely related to Wikipedia. For a given mention in context, NED systems (Hachey et al., 2012; Lazic et al., 2015) typically rely on two models: (1) a mention module returns possible entities which can be referred to by the mention, ordered by prior probabilities; (2) a conFigure 1: Two examples where NED systems fail, motivating our two background models: similar entities (top) and selectional preferences (bottom). The logos correspond to the gold label. text model orders the entities according to the context of the mention, using features extracted from annotated training data. In addition, some systems check whether the entity is coherent with the rest of entities mentioned in the document, although (Lazic et al., 2015) shows that the coherence module is not required for top performance. Figure 1 shows two real examples from the development dataset which contains text from News, where the clues in the context are too weak or misleading. In fact, two mentions in those examples (Derbyshire in the first and Liechtenstein in the second) are wrongly disambiguated by a bag-of1903 words context model. In the first example, the context is very poor, and the system returns the county instead of the cricket team. In order to disambiguate it correctly one needs to be aware that Derbyshire, when occurring on News, is most notably associated with cricket. This background information can be acquired from large News corpora such as Reuters (Lewis et al., 2004), using distributional methods to construct a list of closely associated entities (Mikolov et al., 2013). Figure 1 shows entities which are distributionally similar to Derbyshire, ordered by similarity strength. 
Although the list might say nothing to someone not acquainted with cricket, all entities in the list are strongly related to cricket: Middlesex used to be a county in the UK that gives name to a cricket club, Nottinghamshire is a county hosting two powerful cricket and football teams, Edgbaston is a suburban area and a cricket ground, the most notable team to carry the name Glamorgan is Glamorgan County Cricket Club, Trevor Barsby is a cricketer, as are all other people in the distributional context. When using these similar entities as context, our system does return the correct entity for this mention. In the second example, the words in the context lead the model to return the football team for Liechtenstein, instead of the country, without being aware that the nominal event “visit to” prefers locations arguments. This kind of background information, known as selections preferences, can be easily acquired from corpora (Erk, 2007). Figure 1 shows the most frequent entities found as arguments of “visit to” in the Reuters corpus. When using these filler entities as context, the context model does return the correct entity for this mention. In this article we explore the addition of two kinds of background information induced from corpora to the usual context of occurrence: (1) given a mention we use distributionally similar entities as additional context; (2) given a mention and the syntactic dependencies in the context sentence, we use the selectional preferences of those syntactic dependencies as additional context. We test their contribution separately and combined, showing that they introduce complementary information. Our contributions are the following: (1) we introduce novel background information to provide additional disambiguation context for NED; (2) we integrate this information in a Bayesian generative NED model; (3) we show that similar entities are useful when no textual context is present; (4) we show that selectional preferences are useful when limited context is present; (5) both kinds of background information help improve results of a NED system, yielding the state-of-the-art in the TAC KBP DEL 2014 dataset and getting the third best results in the CoNLL 2003 dataset; (6) we release both resources for free to facilitate reproducibility. 1 The paper is structured as follows. We first introduce the method to acquire background information, followed by the NED system. Section 4 presents the evaluation datasets, Section 5 the development experiments and Section 6 the overall results. They are followed by related work, error analysis and the conclusions section. 2 Acquiring background information We built our two background information resources from the Reuters corpus (Lewis et al., 2004), which comprises 250K documents. We chose this corpus because it is the one used to select the documents annotated in one of our gold standards (cf. Section 4). The documents in this corpus are tagged with categories, which we used to explore the influence of domains. The documents were processed using a publicly available NLP pipeline, Ixa-pipes,2 including tokenization, lematization, dependency tagging and NERC. 2.1 Similar entity mentions Distributional similarity is known to provide useful information regarding words that have similar co-occurrences. We used the popular word2vec3 tool to produce vector representations for named entities in the Reuters corpus. 
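A minimal sketch of this step, using gensim's word2vec implementation rather than the original C tool (an assumption), over a hypothetical tokenised Reuters corpus in which multi-word entity mentions have already been joined with underscores and marked with a tag, as described next; the "/ENT" tag and function names are illustrative.

```python
from gensim.models import Word2Vec

def build_similarity_resource(sentences, topn=10):
    """`sentences`: lists of tokens in which NERC-detected entity mentions appear
    as single tagged tokens such as "Manchester_United/ENT" (assumed convention)."""
    model = Word2Vec(sentences)  # default parameters, as in the paper
    resource = {}
    for token in model.wv.index_to_key:
        if token.endswith("/ENT"):  # keep entries for named entities only
            neighbours = model.wv.most_similar(token, topn=topn)
            # keep only entity neighbours, discarding ordinary context words
            resource[token] = [(t, s) for t, s in neighbours if t.endswith("/ENT")]
    return resource
```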
In order to build a resource that yields similar entity mentions, we took all entity-mentions detected by the NERC tool and, if they were multi word entities, joined them into a single token replacing spaces with underscores, and appended a tag to each of them. We run word2vec with default parameters on the preprocessed corpus. We only keep the vectors for named entities, but note that the corpus contains 1http://ixa2.si.ehu.es/anderbarrena/ 2016ACL_files.zip 2http://ixa2.si.ehu.es/ixa-pipes/ 3https://code.google.com/archive/p/ word2vec/ 1904 both named entities and other words, as they are needed to properly model co-occurrences. Given a named entity mention, we are thus able to retrieve the named entity mentions which are most similar in the distributional vector space. All in all, we built vectors for 95K named entity mentions. Figure 1 shows the ten most similar named entities for Derbyshire according to the vectors learned from the Reuters corpus. These similar mentions can be seen as a way to encode some notion of a topic-related most frequent sense prior. 2.2 Selectional Preferences Selectional preferences model the intuition that arguments of predicates impose semantic constraints (or preferences) on the possible fillers for that argument position (Resnik, 1996). In this work, we use the simplest model, where the selectional preference for an argument position is given by the frequency-weighted list of fillers (Erk, 2007). We extract dependency patterns as follows. After we parse Reuters with the Mate dependency parser (Bohnet, 2010) integrated in IxaPipes, we extract (H D −→C) dependency triples, where D is one of the Subject, Object or Modifier dependencies4 (SBJ, OBJ, MOD, respectively), H is the head word and C the dependent word. We extract fillers in both directions, that is, the set of fillers in the dependent position {C : (H D −→C)}, but also the fillers in the head position {H : (H D −→C)}. Each such configuration forms a template, (H D −→ ∗) and (∗D −→C). In addition to triples (single dependency relations) we also extracted tuples involving two dependency relations in two flavors: (H D1 −−→C1 D2 −−→ C2) and (C1 D1 ←−−H D2 −−→C2). Templates and fillers are defined as done for single dependencies, but, in this case, we extract fillers in any of the three positions and we thus have three different templates for each flavor. As dependency parsers work at the word level, we had to post-process the output to identify whether the word involved in the dependency was part of a named entity identified by the NERC algorithm. We only keep tuples which involve at least one name entity. Some examples for the three kinds of tuples follow, including the frequency of 4Labels are taken from the Penn Treebank https://www.ling.upenn.edu/courses/Fall_ 2003/ling001/penn_treebank_pos.html occurrence, with entities shown in bold: (beat SBJ −−−→Australia) 141 (refugee MOD −−−−→Hutu) 1681 (visit MOD −−−−→to MOD −−−−→United States) 257 (match MOD −−−−→against MOD −−−−→Manchester United) 12 (Spokesman SBJ ←−−−tell OBJ −−−→Reuters) 1378 (The Middle East MOD ←−−−−process MOD −−−−→peace) 1126 When disambiguating a mention of a named entity, we check whether the mention occurs on a known dependency template, and we extract the most frequent fillers of that dependency template. 
For instance, the bottom example in Figure 1 shows how Liechtenstein occurs as a filler of the template (visit MOD −−−−→to MOD −−−−→*), and we thus extract the selectional preference for this template, which includes, in the figure 1, the ten most frequent filler entities. We extracted more than 4.3M unique tuples from Reuters, producing 2M templates and their respective fillers. The most frequent dependency was MOD, followed by SUBJ and OBJ 5 The selectional preferences include 400K different named entities as fillers. Note that selectional preferences are different from dependency path features. Dependency path features refer to features in the immediate context of the entity mention, and are sometimes added as additional features of supervised classifiers. Selectional preferences are learnt collecting fillers in the same dependency path, but the fillers occur elsewhere in the corpus. 3 NED system Our disambiguation system is a N¨aive Bayes model as initially introduced by (Han and Sun, 2011a), but adapted to integrate the background information extracted from the Reuters corpus. The model is trained using Wikipedia,6 which is also used to generate the entity candidates for each mention. Following usual practice, candidate generation is performed off-line by constructing an association between strings and Wikipedia articles, which we call dictionary. The association is performed using article titles, redirections, disambiguation pages, and textual anchors. Each association is scored with the number of times the string was 51.5M, 0.8M and 0.7M respectively 6We used a dump from 25-5-2011. This dump is close in time to annotations of the datasets used in the evaluation (c.f. Section 4) 1905 e s c csp csim Figure 2: Dependencies among variables in our Bayesian network. used to refer to the article (Agirre et al., 2015). We also use Wikipedia to extract training mention contexts for all possible candidate entities. Mention contexts for an entity are built by collecting a window of 50 words surrounding any hyper link pointing to that entity. Both training and test instances are preprocessed the same way: occurrence context is tokenized, multi-words occurring in the dictionary are collapsed as a single token (longest matches are preferred). All occurrences of the same target mention in a document are disambiguated collectively, as we merge all contexts of the multiple mentions into one, following the one-entity-perdiscourse hypothesis (Barrena et al., 2014). The N¨aive Bayes model is depicted in Figure 2. The candidate entity e of a given mention s, which occurs within a context c, is selected according to the following formula: e = arg max e P(s, c, csp, csim, e) = arg max e P(e)P(s|e)P(c|e)P(csp|e, s)P(csim|e, s) The formula combines evidences taken from five different probabilities: the entity prior p(e), the mention probability p(s|e), the textual context p(c|s), the selectional preferences P(csp|e, s) and the distributional similarity P(csim|e, s). This formula is also referred to as the “Full model”, as we also report results of partial models which use different combinations of the five probability estimations. Entity prior P(e) represents the popularity of entity e, and is estimated as follows: P(e) ∝f(∗, e) + 1 f(∗, ∗) + N where f(∗, e) is the number of times the entity e is referenced within Wikipedia, f(∗, ∗) is the total number of entity mentions and N is the number of distinct entities in Wikipedia. The estimation is smoothed using the add-one method. 
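For concreteness, a minimal sketch of the Full model scoring in log space; the `probs` object bundling the smoothed component estimates described in this and the following paragraphs is a hypothetical interface, not the authors' implementation.

```python
import math

def entity_prior(e, ref_count, total_refs, n_entities):
    """Add-one smoothed P(e) = (f(*, e) + 1) / (f(*, *) + N), as above."""
    return (ref_count.get(e, 0) + 1) / (total_refs + n_entities)

def full_model_score(e, s, context, sp_fillers, sim_mentions, probs):
    """Log of P(e) P(s|e) P(c|e) P(c_sp|e,s) P(c_sim|e,s)."""
    return (math.log(probs.entity_prior(e))
            + math.log(probs.mention(s, e))              # P(s|e)
            + math.log(probs.context(context, e))        # P(c|e)
            + math.log(probs.context(sp_fillers, e))     # P(c_sp|e, s)
            + math.log(probs.context(sim_mentions, e)))  # P(c_sim|e, s)

def disambiguate(s, context, sp_fillers, sim_mentions, candidates, probs):
    """Arg max over the dictionary candidates generated for mention s."""
    return max(candidates, key=lambda e: full_model_score(
        e, s, context, sp_fillers, sim_mentions, probs))
```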
Mention probability P(s|e) represents the probability of generating the mention s given the entity e, and is estimated as follows: P(s|e) ∝θ f(s, e) f(∗, e) + (1 −θ)f(s, ∗) f(∗, ∗) where f(s, e) is the number of times mention s is used to refer to entity e and f(s, ∗) is the number of times mention s is used as anchor. We set the θ hyper-parameter to 0.9 according to developments experiments in the CoNLL testa dataset (cf. Section 5.5). Textual context P(c|e) is the probability of entity e generating the context c = {w1, . . . , wn}, and is expressed as: P(c|e) = Y w∈c P(w|e) 1 n where 1 n is a correcting factor that compensates the effect of larger contexts having smaller probabilities. P(w|e), the probability of entity e generating word w, is estimated following a bag-of-words approach: P(w|e) ∝λc(w, e) c(∗, e) + (1 −λ)f(w, ∗) f(∗, ∗) where c(w, e) is the number of times word w appears in the mention contexts of entity e, and c(∗, e) is the total number of words in the mention contexts. The term in the right is a smoothing term, calculated as the likelihood of word w being used as an anchor in Wikipedia. λ is set to 0.9 according to development experiments done in CoNLL testa. Distributional Similarity P(csim|e, s) is the probability of generating a set of similar entity mentions given an entity mention pair. This probability is calculated and estimated in exactly the same way as the textual context above, but replacing the mention context c with the mentions of the 30 most similar entities for s (cf. Section 2.1). Selectional Preferences P(csp|e, s) is the probability of generating a set of fillers csp given an entity and mention pair. The probability is again analogous to the previous ones, but using the filler entities of the selectional preferences of s instead of the context c (cf. Section 2.2). In our experiments, we select the 30 most frequent fillers for each selectional preferences, concatenating the filler list when more than one selectional preference is applied. 1906 3.1 Ensemble model In addition to the Full model, we created an ensemble system that combines the probabilities described above using a weighting schema, which we call “Full weighted model”. In particular, we add an exponent coefficient to the probabilities, thus allowing to control the contribution of each model. arg max e P(e)αP(s|e)β P(c|e)γP(csp|e, s)δP(csim|e, s)ω We performed an exhaustive grid search in the interval (0, 1) for each of the weights, using a step size of 0.05, and discarding the combinations whose sum is not one. Evaluation of each combination was performed in the CoNLL testa development set, and the best combination was applied in the test sets.7 4 Evaluation Datasets The evaluation has been performed on one of the most popular datasets, the CoNLL 2003 namedentity disambiguation dataset, also know as the AIDA or CoNLL-Yago dataset (Hoffart et al., 2011). It is composed of 1393 news documents from Reuters Corpora where named entity mentions have been manually identified. It is divided in three main parts: train, testa and testb. We used testa for development experiments, and testb for the final results and comparison with the state-ofthe-art. We ignored the training part. In addition, we also report results in the Text Analysis Conference 2014 Diagnostic Entity Linking task dataset (TAC DEL 2014).8 The gold standard for this task is very similar to the CoNLL dataset, where target named entity mentions have been detected by hand. 
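Returning briefly to the ensemble of Section 3.1, a minimal sketch of the exhaustive weight search; the step size and sum-to-one constraint follow the text, while `dev_eval` (returning micro-accuracy on CoNLL testa for a given weight tuple) and all names are assumptions.

```python
import itertools
import numpy as np

def grid_search_weights(dev_eval, step=0.05):
    """Search (alpha, beta, gamma, delta, omega) in (0, 1) with the given step,
    keeping only combinations that sum to one; in log space the weights simply
    multiply the five component log-probabilities of the Full weighted model."""
    values = np.arange(step, 1.0, step)
    best, best_acc = None, -1.0
    for combo in itertools.product(values, repeat=5):
        if abs(sum(combo) - 1.0) > 1e-6:
            continue
        acc = dev_eval(combo)
        if acc > best_acc:
            best, best_acc = combo, acc
    return best, best_acc
```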
Through the beginning of the task (2009 to 2013) the TAC datasets were query-driven, that is, the input included a document and a challenging and sometimes partial target-mention to disambiguate. As this task also involved mention detection and our techniques are sensitive to mention detection errors, we preferred to factor out that variation and focus on the 2014. The evaluation measure used in this paper is micro-accuracy, that is, the percentage of linkable mentions that the system disambiguates correctly, as widely used in the CoNLL dataset. Note 7The best combination was α = 0.05, β = 0.1, γ = 0.55 δ = 0.15, ω = 0.15 8http://www.nist.gov/tac/2014/KBP/ Dataset Documents Mentions CoNLL testa 216 4791 CoNLL testb 231 4485 TAC2014 DEL test 138 2817 Table 1: Document and linkable mention counts for CoNLL and TAC2014 DEL datasets. that TAC2014 EDL included several evaluation measures, including the aforementioned microaccuracy of linkable mentions, but the official evaluation measure was Bcubed+ F1 score, involving also detection and clustering of mentions which refer to entities not in the target knowledge base. We decided to use the same evaluation measure for both datasets, for easier comparison. Table 1 summarizes the statistics of the datasets used in this paper where document and mention counts are presented. 5 Development experiments We started to check the contribution of the acquired background information in the testa section of the CoNLL dataset. In fact, we decided to focus first on a subset of testa about sports,9 and also acquired background information from the sports sub-collection of the Reuters corpus.10 The rationale was that we wanted to start in a controlled setting, and having assumed that the domain of the test documents and the source of the background information could play a role, we decided to start focusing on the sports domain first. Another motivation is that we noticed that the ambiguity between locations and sport clubs (e.g. football, cricket, rugby, etc.) is challenging, as shown in Figure 1. 5.1 Entity similarity with no context In our first controlled experiment, we wanted to test whether the entity similarity resource provided any added value for the cases where the target mentions had to be disambiguated out of context. Our hypothesis was that the background information from the unannotated Reuters collection, entity similarity in this case, should provide improved performance. We thus simulated a corpus where mentions have no context, extracting the named entity mentions in the sports subset that 9Including 102 out of the 216 documents in testa, totaling 3319 mentions. 10Including approx. 35K documents out of the 250K documents in Reuters 1907 Method m-acc P(e)P(s|e) 63.83 P(e)P(s|e)P(csim|e, s) 70.98 Table 2: Results on mentions with no context on the sports subset of testa, limited to 85% of the mentions (cf. Section 5.1). Method m-acc P(e)P(s|e) 63.66 P(e)P(s|e)P(c|e) 66.18 P(e)P(s|e)P(csp|e, s) 67.33 P(e)P(s|e)P(c|e)P(csp|e, s) 68.78 Table 3: Results on mentions with access to limited context on the sports subset of testa, limited to the 45% of mentions (cf. Section 5.2). had an entry in the entity similarity resource (cf. Section 2.1), totaling 85% of the 3319 mentions. Table 2 shows that the entity similarity resource improves the results of the model combining the entity prior and mention probability, similar to the so-called most frequent sense baseline (MFS). 
Note that the combination of both entity prior and mention probability is a hard-to-beat baseline, as we will see in Section 6. This experiment confirms that entity similarity information is useful when no context is present. 5.2 Selectional preferences with short context In our second controlled experiment, we wanted to test whether the selectional preferences provided any added value for the cases where the target mentions had limited context, that of the dependency template. Our hypothesis was that the background information from the unannotated Reuters collection, selectional preferences in this case, should provide improved performance with respect to the baseline generative model of context. We thus simulated a corpus where mentions have only short context, exactly the same as the dependency templates which apply to the example, constructed extracting the named entity mentions in the sports subset that contained matching templates in the selectional preference resource (cf. Section 2.2), totaling 45% of the 3319 mentions. Table 3 shows that the selectional preference resource (third row) allows to improve the results with respect to the no-context baseline (first row) and, more importantly, with respect to the baseMethod m-acc P(e)P(s|e)P(c|e) 69.54 P(e)P(s|e)P(c|e)P(csp|e, s) 71.25 P(e)P(s|e)P(c|e)P(csim|e, s) 72.64 Full 73.94 Table 4: Results on mentions with limited context on the sports subset of testa, limited to the 41% of the mentions (cf. Section 5.3) Models Spor. Reut. P(e)P(s|e) 65.52 65.52 P(e)P(s|e)P(c|e) 72.81 72.81 P(e)P(s|e)P(c|e)P(csp|e, s) 73.56 73.06 P(e)P(s|e)P(c|e)P(csim|e, s) 75.73 76.62 Full 76.30 76.87 Table 5: Results on the entire sports subset of testa: middle column uses the sports subset of Reuters to acquire background information, right column uses the full Reuters (cf. Section 5.4). line generative model (second row). The last row shows that the context model and the selectional preference model are complementary, as they produce the best result in the table. This experiment confirms that selectional preference information is effective when limited context is present. 5.3 Combinations In our third controlled experiments, we combine all three context and background models and evaluate them in the subset of the sports mentions that have entries in the similarity resource, and also contain matching templates in the selectional preference resource (41% of the sports subset). Note that, in this case, the context model has access to the entire context. Table 4 shows that, effectively, the background information adds up, with best results for the full combined model (cf. Section 3), confirming that both sources of background information are complementary to the baseline context model and between themselves. 5.4 Sports subsection of CoNLL testa The previous experiments have been run on a controlled setting, limited to the subset where our constructed resources could be applied. In this section we report results for the entire sports subset of CoNLL testa. The middle column in Table 5 shows the results for the two baselines, and the improvements when adding the two background 1908 models, separately, and in combination. The results show that the improvements reported in the controlled experiments carry over when evaluating to all mentions in the Sport subsection, with an accumulated improvement of 3.5 absolute points over the standard NED system (second row). 
The experiments so far have tried to factor out domain variation, and thus the results have been produced using the background information acquired from the sports subset of the Reuters collection. In order to check whether this control of the target domain is necessary, reproduced the same experiment using the full Reuters collection to build the background information, as reported in the rightmost column in Table 5. The results are very similar,11 with a small decrease for selectional preferences, a small increase for the similarity resource, and a small increase for the full system. In view of these results, we decided to use the full Reuters collection to acquire the background knowledge for the rest of the experiments, and did not perform further domain-related experiments. 5.5 Results on CoNLL testa Finally, Table 6 reports the results on the full development dataset. The results show that the good results in the sports subsection carry over to the full dataset. The table reports results for the baseline systems (two top rows) and the addition of the background models, including the Full model, which yields the best results. In addition, the two rows in the bottom report the results of the ensemble methods (cf. Section 3.1) which learn the weights on the same development dataset. These results are reported for completeness, as they are an over-estimation, and are over-fit. Note that all hyper-parameters have been tuned on this development dataset, including the ensemble weights, smoothing parameters λ and θ (cf. Section 3), as well as the number of similar entities and the number of fillers in the selectional preferences. The next section will show that the good results are confirmed in unseen test datasets. 6 Overall Results In the previous sections we have seen that the background information is effective improving the results on development. In this section we report 11The two first rows do not use background information, and are thus the same. System testa P(e)P(s|e) 73.76 P(e)P(s|e)P(c|e) 78.98 P(e)P(s|e)P(c|e)P(csp|e, s) 79.32 P(e)P(s|e)P(c|e)P(csim|e, s) 81.76 Full 81.90 P(e)αP(s|e)βP(c|e)γ 85.20 Full weighted 86.62 Table 6: Results on the full testa dataset (cf. Section 5.5). System CoNLL TAC14 P(e)P(s|e) 73.07 78.31 P(e)P(s|e)P(c|e) 79.98 82.11 P(e)P(s|e)P(c|e)P(csp|e, s) 81.31 82.61 P(e)P(s|e)P(c|e)P(csim|e, s) 82.72 83.24 Full 82.85 83.21 P(e)αP(s|e)βP(c|e)γ 86.44 81.61 Full weighted 88.32 83.46 Table 7: Overall micro accuracy results on the CoNLL testb and TAC 2014 DEL datasets. the result of our model in the popular CoNLL testb and TAC2014 DEL datasets, which allow to compare to the state-of-the-art in NED. Table 7 reports our results, confirming that both background information resources improve the results over the standard NED generative system, separately, and in combination, for both datasets (Full row). All differences with respect to the standard generative system are statistically significant according to the Wilcoxon test (p-value < 0.05). In addition, we checked the contribution of learning the ensemble weights on the development dataset (testa). Both the generative system with and without background information improve considerably. 
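The significance claims in this section can be reproduced with a paired Wilcoxon signed-rank test; a minimal sketch follows, assuming per-document accuracies as the pairing unit (the exact unit is not specified in the text) and hypothetical data structures.

```python
from collections import defaultdict
from scipy.stats import wilcoxon

def per_document_accuracy(gold, predicted, doc_of):
    """Accuracy per document: fraction of linkable mentions resolved correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for mention_id, gold_entity in gold.items():
        d = doc_of[mention_id]
        total[d] += 1
        correct[d] += int(predicted.get(mention_id) == gold_entity)
    return {d: correct[d] / total[d] for d in total}

def significant_difference(acc_a, acc_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test over documents shared by both systems."""
    docs = sorted(acc_a)
    stat, p = wilcoxon([acc_a[d] for d in docs], [acc_b[d] for d in docs])
    return p < alpha, p
```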
The error reduction between the weighted model using background information (Full weighted row) and the generative system without background information (previous row) exceeds 10% in both datasets, providing very strong results, and confirming that the improvement due to background information is consistent across both datasets, even when applied on a very strong system. The difference is statistically significant in both datasets. 1909 System CoNLL TAC14 Full weighted 88.32 83.46 (Barrena et al., 2015) 83.61 80.69 (Lazic et al., 2015) 86.40 — (Alhelbawy & Gaizauskas,14) *87.60 — (Chisholm and Hachey, 2015) 88.70 — (Pershina et al., 2015) *91.77 — TAC14 best (Ji et al., 2014) — 82.70 Table 8: Overall micro accuracy results on the CoNLL testb and TAC 2014 DEL datasets, including the current state-of-the-art. Starred results are not comparable, see text. 7 Related Work Our generative model is based on (Han and Sun, 2011b), which is basically the core method used in later work (Barrena et al., 2015; Lazic et al., 2015) with good results. Although the first do not report results on our datasets the other two do. (Barrena et al., 2015) combines the generative model with a graph-based system yielding strong results in both datasets. (Lazic et al., 2015) adds a parameter estimation method which improved the results using unannotated data. Our work is complementary to those, as we could also introduce additional disambiguation probabilities (Barrena et al., 2015), or apply more sophisticated parameter estimation methods (Lazic et al., 2015). Table 8 includes other high performing or wellknown systems, which usually use complex methods to combine features coming from different sources, where our results are only second to those of (Chisholm and Hachey, 2015) in the CoNLL dataset and best in TAC 2014 DEL. The goal of this paper is not to provide the best performing system, but yet, the results show that our use of background information allows to obtain very good results. Alhelbawy and Gaizauskas (2014) combines local and coherence features by means of a graph ranking scheme, obtaining very good results on the CONLL 2003 dataset. They evaluate on the full dataset, i.e. they test on train, testa and testb (20K, 4.8K and 4.4K mentions respectively). Our results on the same dataset are 84.25 (Full) and 88.07 (Full weighted), but note that we do tune the parameters on testa, so this might be slighly over-estimated. Our system does not use global coherence, and therefore their method is complementary to our NED system. In principle, our proposal for enriching context should improve the results of their system. Pershina et al. (2015) propose a system closely resembling (Alhelbawy and Gaizauskas, 2014). They report the best known results on CONNL 2003 so far, but unfortunately, their results are not directly comparable to the rest of the state-of-theart, as they artificially insert the gold standard entity in the candidate list.12 In (Chisholm and Hachey, 2015) the authors explore the use of links gathered from the web as an additional source of information for NED. They present a complex two-staged supervised system that incorporates global coherence features, with large amount of noisy training. Again, using additional training data seems an interesting future direction complementary to ours. We are not aware of other works which try to use additional sources of context or background information as we do. 
(Cheng and Roth, 2013) use relational information from Wikipedia to add constraints to the coherence model, and is somehow reminiscent of our use dependency templates, although they focus on recognizing a fixed set of relations between entities (as in information extraction) and do not model selectional preferences. (Barrena et al., 2014) explored the use of syntactic collocations to ensure coherence, but did not model any selectional preferences. Previous work on word sense disambiguation using selectional preference includes (McCarthy and Carroll, 2003) among others, but they report low results. (Brown et al., 2011) applied wordNet hypernyms for disambiguating verbs, but they did not test the improvement of this feature. (Taghipour and Ng, 2015) use embeddings as features which are fed into a supervised classifier, but our method is different, as we use embeddings to find similar words to be fed as additional context. None of the state-of-the-art systems, e.g. (Zhong and Ng, 2010), uses any model of selectional preferences. 8 Discussion We performed an analysis of the cases where our background models worsened the disambiguation performance. Both distributional similarity and selectional preferences rely on correct mention detection in the background corpus. We detected 12https://github.com/masha-p/PPRforNED/ readme.txt 1910 that mentions where missed, which caused some coverage issues. In addition, the small size of the background corpus sometimes produces arbitrary contexts. For instance, subject position fillers of “score” include mostly basketball players like Michael Jordan or Karl Malone. A similar issue was detected in the distributional similarity resource. A larger corpus would produce a broader range of entities, and thus use of larger background corpora (e.g. Gigaword) should alleviate those issues. Another issue was that some dependencies do not provide any focused context, as for instance arguments of say or tell. We think that a more sophisticated combination model should be able to detect which selectional preferences and similarity lists provide a focused set of instances. 9 Conclusions and Future Work In this article we introduced two novel kinds of background information induced from corpora to the usual context of occurrence in NED: (1) given a mention we used distributionally similar entities as additional context; (2) given a mention and the syntactic dependencies in the context sentence, we used the selectional preferences of those syntactic dependencies as additional context. We showed that similar entities are specially useful when no textual context is present, and that selectional preferences are useful when limited context is present. We integrated them in a Bayesian generative NED model which provides very strong results. In fact, when integrating all knowledge resources we yield the state-of-the-art in the TAC KBP DEL 2014 dataset and get the third best results in the CoNLL 2003 dataset. Both resources are freely available for reproducibility.13 The analysis of the acquired information and the error analysis show several avenues for future work. First larger corpora should allow to increase the applicability of the similarity resource, and specially, that of the dependency templates, and also provide better quality resources. Acknowledgments We thank the reviewers for their suggestions. This work was partially funded by MINECO (TUNER project, TIN2015-65308-C5-1-R). 
The IXA group 13http://ixa2.si.ehu.es/anderbarrena/ 2016ACL_files.zip is funded by the Basque Government (A type Research Group). Ander Barrena enjoys an PhD scholarship from the Basque Government. References Eneko Agirre, Ander Barrena, and Aitor Soroa. 2015. Studying the wikipedia hyperlink graph for relatedness and disambiguation. CoRR, abs/1503.01655. Ayman Alhelbawy and Robert Gaizauskas. 2014. Graph ranking for collective named entity disambiguation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 75–80, Baltimore, Maryland, June. Association for Computational Linguistics. Ander Barrena, Eneko Agirre, Bernardo Cabaleiro, Anselmo Pe˜nas, and Aitor Soroa. 2014. ”one entity per discourse” and ”one entity per collocation” improve named-entity disambiguation. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2260–2269, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics. Ander Barrena, Aitor Soroa, and Eneko Agirre. 2015. Combining mention context and hyperlinks from wikipedia for named entity disambiguation. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 101–105, Denver, Colorado, June. Association for Computational Linguistics. Bernd Bohnet. 2010. Very high accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 89–97, Stroudsburg, PA, USA. Association for Computational Linguistics. Susan Windisch Brown, Dmitriy Dligach, and Martha Palmer. 2011. Verbnet class assignment as a wsd task. In Proceedings of the Ninth International Conference on Computational Semantics, IWCS ’11, pages 85–94, Stroudsburg, PA, USA. Association for Computational Linguistics. R. C. Bunescu and M. Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceesings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 9–16, Trento, Italy. The Association for Computer Linguistics. X. Cheng and D. Roth. 2013. Relational inference for wikification. In EMNLP. Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. Transactions of the Association for Computational Linguistics, 3:145–156. 1911 Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 216–223, Prague, Czech Republic, June. Association for Computational Linguistics. B. Hachey, W. Radford, J. Nothman, M. Honnibal, and J.R. Curran. 2012. Evaluating Entity Linking with Wikipedia. Artificial Intelligence, 194:130– 150, January. X. Han and L. Sun. 2011a. A generative entitymention model for linking entities with knowledge base. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 945–954, Stroudsburg, PA, USA. Association for Computational Linguistics. Xianpei Han and Le Sun. 2011b. A generative entitymention model for linking entities with knowledge base. In ACL HLT. J. Hoffart, M.A. Yosef, I. Bordino, H. F¨urstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. 2011. Robust Disambiguation of Named Entities in Text. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 782–792, Stroudsburg, PA, USA. Association for Computational Linguistics. Heng Ji, Joel Nothman, and Ben Hachey. 2014. Overview of tac-kbp2014 entity discovery and linking tasks. In Proc. Text Analysis Conference (TAC2014). Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, and Fernando Pereira. 2015. Plato: A selective context model for entity resolution. Transactions of the Association for Computational Linguistics, 3:503–515. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397. Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Comput. Linguist., 29(4):639–654, December. Paul McNamee and Hoa Dang. 2009. Overview of the TAC 2009 Knowledge Base Population track. In TAC. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Maria Pershina, Yifan He, and Ralph Grishman. 2015. Personalized page rank for named entity disambiguation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 238–243, Denver, Colorado, May– June. Association for Computational Linguistics. Philip Resnik. 1996. Selectional constraints: An information-theoretic model and its computational realization. Cognition, 61:127–159, November. Kaveh Taghipour and Hwee Tou Ng. 2015. Semisupervised word sense disambiguation using word embeddings in general and specific domains. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 314–323, Denver, Colorado, May–June. Association for Computational Linguistics. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL: System Demonstrations. 1912
2016
179
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 183–193, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Literal and Metaphorical Senses in Compositional Distributional Semantic Models E. Dar´ıo Guti´errez1 Ekaterina Shutova2 Tyler Marghetis3 Benjamin K. Bergen4 1 University of California San Diego 2 University of Cambridge 3 Indiana University Bloomington [email protected] [email protected] [email protected] [email protected] Abstract Metaphorical expressions are pervasive in natural language and pose a substantial challenge for computational semantics. The inherent compositionality of metaphor makes it an important test case for compositional distributional semantic models (CDSMs). This paper is the first to investigate whether metaphorical composition warrants a distinct treatment in the CDSM framework. We propose a method to learn metaphors as linear transformations in a vector space and find that, across a variety of semantic domains, explicitly modeling metaphor improves the resulting semantic representations. We then use these representations in a metaphor identification task, achieving a high performance of 0.82 in terms of F-score. 1 Introduction An extensive body of behavioral and corpuslinguistic studies suggests that metaphors are pervasive in everyday language (Cameron, 2003; Steen et al., 2010) and play an important role in how humans define and understand the world. According to Conceptual Metaphor Theory (CMT) (Lakoff and Johnson, 1981), individual metaphorical expressions, or linguistic metaphors (LMs), are instantiations of broader generalizations referred to as conceptual metaphors (CMs). For example, the phrases half-baked idea, food for thought, and spoon-fed information are LMs that instantiate the CM IDEAS ARE FOOD. These phrases reflect a mapping from the source domain of FOOD to the target domain of IDEAS (Lakoff, 1989). Two central claims of the CMT are that this mapping is systematic, in the sense that it consists of a fixed set of ontological correspondences, such as thinking is preparing, communication is feeding, understanding is digestion; and that this mapping can be productively extended to produce novel LMs that obey these correspondences. Recent years have seen the rise of statistical techniques for metaphor detection. Several of these techniques leverage distributional statistics and vector-space models of meaning to classify utterances as literal or metaphorical (Utsumi, 2006; Shutova et al., 2010; Hovy et al., 2013; Tsvetkov et al., 2014). An important insight of these studies is that metaphorical meaning is not merely a property of individual words, but rather arises through cross-domain composition. The meaning of sweet, for instance, is not intrinsically metaphorical. Yet this word may exhibit a range of metaphorical meanings—e.g., sweet dreams, sweet person, sweet victory–that are created through the interplay of source and target domains. If metaphor is compositional, how do we represent it, and how can we use it in a compositional framework for meaning? Compositional distributional semantic models (CDSMs) provide a compact model of compositionality that produces vector representations of phrases while avoiding the sparsity and storage issues associated with storing vectors for each phrase in a language explicitly. 
One of the most popular CDSM frameworks (Baroni and Zamparelli, 2010; Guevara, 2010; Coecke et al., 2010) represents nouns as vectors, adjectives as matrices that act on the noun vectors, and transitive verbs as third-order tensors that act on noun or noun phrase vectors. The meaning of a phrase is then derived by composing these lexical representations. The vast majority of such models build a single representation for all senses of a word, collapsing distinct senses together. One exception is the work of Kartsaklis and Sadrzadeh (2013a), who investigated homonymy, in which lexical items 183 have identical form but unrelated meanings (e.g., bank). They found that deriving verb tensors from all instances of a homonymous form (as compared to training a separate tensor for each distinct sense) loses information and degrades the resultant phrase vector representations. To the best of our knowledge, there has not yet been a study of regular polysemy (i.e. metaphorical or metonymic sense distinctions) in the context of compositional distributional semantics. Yet, due to systematicity in metaphorical cross-domain mappings, there are likely to be systematic contextual sense distinctions that can be captured by a CDSM, improving the resulting semantic representations. In this paper, we investigate whether metaphor, as a case of regular polysemy, warrants distinct treatment under a compositional distributional semantic framework. We propose a new approach to CDSMs, in which metaphorical meanings are distinct but structurally related to literal meanings. We then extend the generalizability of our approach by proposing a method to automatically learn metaphorical mappings as linear transformations in a CDSM. We focus on modeling adjective senses and evaluate our methods on a new data set of 8592 adjective-noun pairs annotated for metaphoricity, which we will make publicly available. Finally, we apply our models to classify unseen adjective-noun (AN) phrases as literal or metaphorical and obtain state-of-the-art performance in the metaphor identification task. 2 Background & Related Work Metaphors as Morphisms. The idea of metaphor as a systematic mapping has been formalized in the framework of category theory (Goguen, 1999; Kuhn and Frank, 1991). In category theory, morphisms are transformations from one object to another that preserve some essential structure of the original object. Category theory provides a general formalism for analyzing relationships as morphisms in a wide range of systems (see Spivak (2014)). Category theory has been used to formalize the CM hypothesis with applications to user interfaces, poetry, and information visualization (Kuhn and Frank, 1991; Goguen and Harrell, 2010; Goguen and Harrell, 2005). Although these formal treatments of metaphors as morphisms are rigorous and wellformalized, they have been applied at a relatively limited scale. This is because this work does not suggest a straightforward and data-driven way to quantify semantic domains or morphisms, but rather focuses on the transformations and relations between semantic domains and morphisms, assuming some appropriate quantification has already been established. In contrast, our methods can learn representations of source-target domain mappings from corpus data, and so are inherently more scalable. Compositional DSMs. Similar issues arose in modeling compositional semantics. 
Formal semantics has dealt with compositional meaning for decades, by using mathematical structures from abstract algebra, logic, and category theory (Montague, 1970; Partee, 1994; Lambek, 1999). However, formal semantics requires manual crafting of features. The central insight of CDSMs is to model the composition of words as algebraic operations on their vector representations, as provided by a conventional DSM (Mitchell and Lapata, 2008). Guevara (2010) and Baroni and Zamparelli (2010) were the first to treat adjectives and verbs differently from nouns. In their models, adjectives are represented by matrices that act on noun vectors. Adjective matrices can be learned using regression techniques. Other CDSMs have also been proposed and successfully applied to tasks such as sentiment analysis and paraphrase (Socher et al., 2011; Socher et al., 2012; Tsubaki et al., 2013; Turney, 2013). Handling Polysemy in CDSMs. Several researchers argue that terms with ambiguous senses can be handled by DSMs without any recourse to additional disambiguation steps, as long as contextual information is available (Boleda et al., 2012; Erk and Pad´o, 2010; Pantel and Lin, 2002; Sch¨utze, 1998; Tsubaki et al., 2013). Baroni et al. (2014) conjecture that CDSMs might largely avoid problems handling adjectives with multiple senses because the matrices for adjectives implicitly incorporate contextual information. However, they do draw a distinction between two ways in which the meaning of a term can vary. Continuous polysemy—the subtle and continuous variations in meaning resulting from the different contexts in which a word appears—is relatively tractable, in their opinion. This contrasts with discrete homonymy—the association of a single term with completely independent meanings (e.g., light house vs. light work). Baroni et al. concede that homonymy is more difficult to handle in 184 CDSMs. Unfortunately, they do not propose a definite way to determine whether any given variation in meaning is polysemy or homonymy, and offer no account of regular polysemy (i.e., metaphor and metonymy) or whether it would pose similar problems as homonymy for CDSMs. To handle the problematic case of homonymy, Kartsaklis and Sadrzadeh (2013b) adapt a clustering technique to disambiguate the senses of verbs, and then train separate tensors for each sense, using the previously mentioned CDSM framework of Coecke et al. (2010). They found that prior disambiguation resulted in semantic similarity measures that correlated more closely with human judgments. In principle, metaphor, as a type of regular polysemy, is different from the sort of semantic ambiguity described above. General ambiguity or vagueness in meaning (e.g. bright light vs bright color) is generally context-dependent in an unsystematic manner. In contrast, in regular polysemy meaning transfer happens in a systematic way (e.g. bright light vs. bright idea), which can be explicitly modeled within a CDSM. The above CDSMs provide no account of such systematic polysemy, which is the gap this paper aims to fill. Computational Work on Metaphor. There is now an extensive literature on statistical approaches to metaphor detection. 
The investigated methods include clustering (Birke and Sarkar, 2006; Shutova et al., 2010; Li and Sporleder, 2010); topic modeling (Bethard et al., 2009; Li et al., 2010; Heintz et al., 2013); topical structure and imageability analysis (Strzalkowski et al., 2013); semantic similarity graphs (Sporleder and Li, 2009), and feature-based classifiers (Gedigian et al., 2006; Li and Sporleder, 2009; Turney et al., 2011; Dunn, 2013a; Dunn, 2013b; Hovy et al., 2013; Mohler et al., 2013; Neuman et al., 2013; Tsvetkov et al., 2013; Tsvetkov et al., 2014). We refer readers to the survey by Shutova (2015) for a more thorough review. Most relevant to the present work are approaches that attempt to identify whether adjective-noun phrases are metaphorical or literal. Krishnakumaran and Zhu (2007) use AN co-occurrence counts and WordNet hyponym/hypernym relations for this task. If the noun and its hyponyms/hypernyms do not occur frequently with the given adjective, then the AN phrase is labeled as metaphorical. Krishnakumaran and Zhu’s system achieves a precision of 0.67. Turney et al. (2011) classify verb and adjective phrases based on their level of concreteness or abstractness in relation to the noun they appear with. They learn concreteness rankings for words automatically (starting from a set of examples) and then search for expressions where a concrete adjective or verb is used with an abstract noun (e.g., dark humor is tagged as a metaphor; dark hair is not). They measure performance on a set of 100 phrases involving one of five adjectives, attaining an average accuracy of 0.79. Tsvetkov et al. (2014) train a random-forest classifier using several features, including abstractness and imageability rankings, WordNet supersenses, and DSM vectors. They report an accuracy of 0.81 on the Turney et al. (2011) AN phrase set. They also introduce a new set of 200 AN phrases, on which they measure an F-score of 0.85. 3 Experimental Data Corpus. We trained our DSMs from a corpus of 4.58 billion tokens. Our corpus construction procedure is modeled on that of Baroni and Zamparelli (2010). The corpus consisted of a 2011 dump of English Wikipedia, the UKWaC (Baroni et al., 2009), the BNC (BNC Consortium, 2007), and the English Gigaword corpus (Graff et al., 2003). The corpus was tokenized, lemmatized, and POStagged using the NLTK toolkit (Bird and Loper, 2004) for Python. Metaphor Annotations. We created an annotated dataset of 8592 AN phrases (3991 literal, 4601 metaphorical). Our choice of adjectives was inspired by the test set of Tsvetkov et al. (2014), though our annotated dataset is considerably larger. We focused on 23 adjectives that can have both metaphorical and literal senses, and which function as source-domain words in relatively productive CMs: TEMPERATURE (cold, heated, icy, warm), LIGHT (bright, brilliant, dim), TEXTURE (rough, smooth, soft); SUBSTANCE (dense, heavy, solid), CLARITY (clean, clear, murky), TASTE (bitter, sour, sweet), STRENGTH (strong, weak), and DEPTH (deep, shallow). We extracted all AN phrases involving these adjectives that occur in our corpus at least 10 times. We filtered out all phrases that require wider context to establish their meaning or metaphoricity—e.g., bright side, weak point. The remaining phrases were annotated using a 185 procedure based on Shutova et al. (2010). 
Annotators were encouraged to rely on their own intuition of metaphor, but were provided with the following guidance: • For each phrase, establish the meaning of the adjective in the context of the phrase. • Try to imagine a more basic meaning of this adjective in other contexts. Basic meanings tend to be: more concrete; related to embodied actions/perceptions/sensations; more precise; historically older/more “original”. • If you can establish a basic meaning distinct from the meaning of the adjective in this context, it is likely to be used metaphorically. If requested, a randomly sampled sentence from the corpus that contained the phrase in question was also provided. The annotation was performed by one of the authors. The author’s annotations were compared against those of a university graduate native English-speaking volunteer who was not involved in the research, on a sample of 500 phrases. Interannotator reliability (Cohen, 1960; Fleiss et al., 1969) was κ = 0.80 (SE = .02). Our annotated data set is publicly available at http: //bit.ly/1TQ5czN 4 Representing Metaphorical Senses in a Compositional DSM In this section we test whether separate treatment of literal and metaphorical senses is justified in a CDSM framework. In that case, training adjective matrix representations on literal and metaphorical subsets separately may result in systematically improved phrase vector representations, despite each matrix making use of fewer training examples. 4.1 Method Our goal is to learn accurate vector representations for unseen adjective-noun (AN) phrases, where adjectives can take on metaphorical or literal senses. Our models build off the CDSM framework of Baroni and Zamparelli (2010), as extended by Li et al. (2014). Each adjective a is treated as a linear map from nouns to AN phrases: p = Aan, where p is a vector for the phrase, n is a vector for the noun, and Aa is a matrix for the adjective. Contextual Variation Model. The traditional representations do not account for the differences in meaning of an adjective in literal vs metaphorical phrases. Their assumption is that the contextual variations in meaning that are encoded by literal and metaphorical senses may be subtle enough that they can be handled by a single catchall matrix per adjective, ABOTH(a). In this model, every phrase i can be represented by pi = ABOTH(a)ni (1) regardless of whether a is used metaphorically or literally in i. This model has the advantage of simplicity and requires no information about whether an adjective is being used literally or metaphorically. In fact, to our knowledge, all previous literature has handled metaphor in this way. Discrete Polysemy Model Alternatively, the metaphorical and literal senses of an adjective may be distinct enough that averaging the two senses together in a single adjective matrix produces representations that are not well-suited for either metaphorical or literal phrases. Thus, the literal-metaphorical distinction could be problematic for CDSMs in the way that Baroni et al. (2014) suggested that homonyms are. Just as Kartsaklis and Sadrzadeh (2013a) solve this problem by representing each sense of a homonym by a different adjective matrix, we represent literal and metaphorical senses by different adjective matrices. Each literal phrase i is represented by pi = ALIT(a)ni, (2) where ALIT(a) is the literal matrix for adjective a. Likewise, a metaphorical phrase is represented by pi = AMET(a)ni, (3) where AMET(a) is the metaphorical matrix for a. Learning. 
Given a data set of noun and phrase vectors $D(a) = \{(n_i, p_i)\}_{i=1}^{N}$ for AN phrases involving adjective a extracted using a conventional DSM, our goal is to learn $A_{D(a)}$. This can be treated as an optimization problem, of learning an estimate $\hat{A}_{D(a)}$ that minimizes a specified loss function. In the case of the squared error loss, $L(A_{D(a)}) = \sum_{i \in D(a)} \|p_i - A_{D(a)} n_i\|_2^2$, the optimal solution can be found precisely using ordinary least-squares regression. However, this may result in overfitting because of the large number of parameters relative to the number of samples (i.e., phrases). Regularization parameters $\lambda = (\lambda_1, \lambda_2)$ can be introduced to keep $\hat{A}_{D(a)}$ small: $\sum_{i \in D(a)} \|p_i - \hat{A}_{D(a)} n_i\|_2^2 + R(\lambda; \hat{A}_{D(a)})$, where $R(\lambda; \hat{A}_D) = \lambda_1 \|\hat{A}_D\|_1 + \lambda_2 \|\hat{A}_D\|_2$. This approach, known as elastic-net regression (Zou and Hastie, 2005), produces better adjective matrices than unregularized regression (Li et al., 2014). Note that the same procedure can be used to learn the adjective representations in both the Contextual Variation model and the Discrete Polysemy model by varying what phrases are included in the training set $D(a)$. In the Contextual Variation model $D(a)$ includes both metaphorical and literal phrases, while in the Discrete Polysemy model it includes only metaphorical phrases when learning $\hat{A}_{MET}(a)$ and testing on metaphorical phrases (and only literal phrases when learning $\hat{A}_{LIT}(a)$ and testing on literal phrases). 4.2 Experimental Setup Extracting Noun & Phrase Vectors. Our approach for constructing term vector representations is similar to that of Dinu et al. (2013). We first selected the 10K most frequent nouns, adjectives, and verbs to serve as context terms. We then constructed a co-occurrence matrix that recorded term-context co-occurrence within a symmetric 5-word context window of the 50K most frequent POS-tagged terms in the corpus. We then used these co-occurrences to compute the positive pointwise mutual information (PPMI) between every pair of terms, and collected these into a term-term matrix. Next, we reduced the dimensionality of this matrix to 100 dimensions using singular-value decomposition. Additionally, we computed "ground truth" distributional vectors for all the annotated AN phrases in our data set by treating the phrases as single terms and computing their PPMI with the 50K single-word terms, and then projecting them onto the same 100-dimensional basis. Training Adjective Matrices. For each adjective a that we are testing, we split the phrases involving that adjective into two subsets, the literal (LIT) subset and the metaphorical (MET) subset. We then split the subsets into 10 folds, so that we do not train and test any matrices on the same phrases. For each fold k, we train three adjective matrices: $\hat{A}_{MET}(a)$ using all phrases from the MET set not in fold k; $\hat{A}_{LIT}(a)$ using all phrases from the LIT set not in fold k; and $\hat{A}_{BOTH}(a)$ using all the phrases from either subset not in fold k. Within each fold, we use nested cross-validation as outlined in Li et al. (2014) to determine the regularization parameters for each regression problem.
[Figure 1: Reduction in error from training on targeted subset (MET/LIT) rather than on all phrases.]
4.3 Evaluating Vector Representations Evaluation. Our goal is to produce a vector prediction of each phrase that will be close to its ground truth distributional vector.
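As a concrete illustration of the training step just described, the following sketch fits an adjective matrix with off-the-shelf elastic-net regression. It is not the authors' implementation: scikit-learn is assumed, the synthetic data, the regularization values (alpha, l1_ratio) and the met_idx/lit_idx index arrays in the closing comments are placeholders, and the nested cross-validation used in the paper to tune the regularization weights is omitted.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def learn_adjective_matrix(noun_vecs, phrase_vecs, alpha=0.1, l1_ratio=0.5):
    """Estimate A such that phrase_i ≈ A @ noun_i.

    noun_vecs:   (n_phrases, dim) array, rows are noun vectors n_i
    phrase_vecs: (n_phrases, dim) array, rows are ground-truth phrase vectors p_i
    alpha / l1_ratio are placeholder hyperparameters; the paper selects the
    regularization weights by nested cross-validation (Li et al., 2014).
    """
    out_dim, in_dim = phrase_vecs.shape[1], noun_vecs.shape[1]
    A = np.zeros((out_dim, in_dim))
    # One elastic-net regression per output dimension of the phrase vector.
    for j in range(out_dim):
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000)
        model.fit(noun_vecs, phrase_vecs[:, j])
        A[j, :] = model.coef_
    return A

# Tiny synthetic check that the regression recovers a known matrix.
rng = np.random.default_rng(0)
n_pairs, dim = 200, 20
nouns = rng.normal(size=(n_pairs, dim))
true_A = rng.normal(size=(dim, dim))
phrases = nouns @ true_A.T + 0.01 * rng.normal(size=(n_pairs, dim))
A_hat = learn_adjective_matrix(nouns, phrases)
print(np.linalg.norm(A_hat - true_A) / np.linalg.norm(true_A))

# The two models of Section 4.1 differ only in which (noun, phrase) pairs
# are used; met_idx / lit_idx below are hypothetical index arrays:
#   A_met  = learn_adjective_matrix(nouns[met_idx], phrases[met_idx])  # Discrete Polysemy
#   A_lit  = learn_adjective_matrix(nouns[lit_idx], phrases[lit_idx])  # Discrete Polysemy
#   A_both = learn_adjective_matrix(nouns, phrases)                    # Contextual Variation
```

The closing comments make the experimental contrast explicit: restricting the training pairs to one sense is the only difference between the Contextual Variation and Discrete Polysemy settings.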
Phrase vectors directly extracted from the corpus by treating the phrase as a single term are the gold standard for predicting human judgment and producing paraphrases (Dinu et al., 2013), so we use these as our ground truth. The quality of the vector prediction for phrase i is measured using the cosine distance between the phrase’s ground truth vector pi and the vector prediction ˆpi: err(ˆpi) = 1 −cos(ˆpi, pi). We then analyze the benefit of training on a reduced subset by calculating a “subset improvement” (SI) score for the MET and LIT subsets of each adjective a. We define the SI for each subset D(a) ∈{LIT(a), MET(a)} as: SI(D(a)) = 1 − P i∈D(a) err(ˆAD(a)ni) P i∈D(a) err(ˆABOTH(a)ni) Positive values of SI thus indicate improved performance when trained on a reduced subset compared to the full set of phrases. For example SILIT(a) = 5% tells us that predicting the phrase vectors for LIT phrases of adjective a using the LIT matrix resulted in a 5% reduction in mean cosine error compared to predicting the phrase vectors using the BOTH matrix. Results. The results are summarized in Fig. 1. Each point indicates the SI for a single adjective and for a single subset. Adjectives are grouped by source domain along the y-axis. Overall, almost every item shows a subset improvement; and, for every source domain, the majority of adjectives show a subset improvement. 187 We analyzed per-adjective SI by fitting a linear mixed-effects model, with a fixed intercept, a fixed effect of test subset (MET vs. LIT), a random effect of source domain, and the maximal converging random effects structure (uncorrelated random intercepts and slopes) (Barr et al., 2013). Training on a targeted subset improved performance by 4.4% ± 0.009(SE) (p = .002). There was no evidence that this differed by test subset (i.e., metaphorical vs. literal senses, p = .35). The positive SI from training on a targeted subset suggests that metaphorical and literal uses of the same adjective are semantically distinct. 4.4 Metaphor Classification Method. The results of the previous section suggest a straightforward classification rule: classify unseen phrase i involving adjective a as metaphorical if cos(pi, ˆAMET(a)ni) < cos( ˆALIT(a)ni). Otherwise, we classify it as literal. Evaluation. We test this method on our data set of 8593 annotated AN phrases using 10-fold cross validation. It is possible that our method’s classification performance is not due to the compositional aspect of the model, but rather to some semantic coherence property among the nouns in the AN phrases that we are testing. To control for this possibility, we compare the performance of our method against four baselines. The first baseline, NOUN-NN, measures the cosine distance between the vector for the noun of the AN phrase being tested and the noun vectors of the nouns participating in an AN phrase in the training folds. The test phrase is then assigned the label of the AN phrase whose noun vector is nearest. PHRASENN proceeds similarly, but using the ground-truth phrase vectors for the test phrase and the training phrases. The test phrase is then assigned the label of the AN phrase whose vector is nearest. The baseline NOUN-CENT first computes the centroid of the noun vectors of the training phrases that are literal, and the centroid of the noun vectors of the training phrases that are metaphorical. It then assigns the test phrase the label of the centroid whose cosine distance from the test phrase’s noun vector is smallest. 
PHRASE-CENT, proceeds similarly, but using phrase vectors. We measure performance against the manual annotations. Results. Our classification method achieved a held-out F-score of 0.817, recall of 0.793, precision of 0.842, and accuracy of 0.809. These reMethod F-score Precision Recall Accuracy MET-LIT 0.817 0.842 0.793 0.809 NOUN-NN 0.709 0.748 0.675 0.703 PHRASE-NN 0.590 0.640 0.547 0.592 NOUN-CENT 0.717 0.741 0.695 0.706 PHRASE-CENT 0.629 0.574 0.695 0.559 Table 1: Performance of the method of §4.4 (METLIT) against various baselines. sults were superior to those of the baselines (Table 1). These results are competitive with the state of the art and demonstrate the importance of compositionality in metaphor identification. 5 Metaphors as Linear Transformations One of the principal claims of the CM hypothesis is that CMs are productive: A CM (i.e., mapping) can generate endless new LMs (i.e., linguistic expressions). Cases where the LMs involve an adjective that has already been used metaphorically and for which we have annotated metaphorical and literal examples can be handled by the methods of §4, but when the novel LM involves an adjective that has only been observed in literal usage, we need a more elaborate model. According to the CM hypothesis, an adjective’s metaphorical meaning is a result of the action of a sourceto-target CM mapping on the adjective’s literal sense. If so, then given an appropriate representation of this mapping it should be possible to infer the metaphorical sense of an adjective without ever seeing metaphorical exemplars—that is, using only the adjective’s literal sense. Our next experiments seek to determine whether it is possible to represent and learn CM mappings as linear maps in distributional vector space. 5.1 Model We model each CM mapping M from source to target domain as a linear transformation CM: AMET(a)ni ≈CMALIT(a)ni (4) We can apply a two-step regression to learn CM. First we apply elastic-net regression to learn the literal adjective matrix ˆALIT(a) as in §4.2. Then we can substitute this estimate into Eq. (4), and apply elastic-net regression to learn the ˆCM that minimizes the regularized squared error loss: X a∈M X i∈D(a) ∥pi −ˆCM ˆALIT(ai)ni∥2 2 + R(λ; ˆCM). 188 To learn CM in this regression problem, we can pool together and train on phrases from many different adjectives that participate in M. 5.2 Experimental Setup We used a cross-validation scheme where we treated each adjective in a source domain as a fold in training the domain’s metaphor transformation matrix. The nested cross-validation procedure we use to set regularization parameters λ and evaluate performance requires at least 3 adjectives in a source domain, so we evaluate on the 6 source domain classes containing at least 3 adjectives. The total number of phrases for these 19 adjectives is 6987 (3659 metaphorical, 3328 literal). 5.3 Evaluating Vector Representations Evaluation. We wish to test whether CM mappings learned from one set of adjectives are transferable to new adjectives for which metaphorical phrases are unseen. As in §4, models were evaluated using cosine error compared to the ground truth phrase vector representation. Since our goal is to improve the vector representation of metaphorical phrases given no metaphorical annotations, we measure performance on the MET phrase subset for each adjective. 
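Before the comparison reported next, a self-contained sketch of the two-step estimation of CM described in Section 5.1 may be helpful. Everything here is synthetic: the adjective names, dimensions and random matrices are placeholders, and plain least squares (numpy's lstsq) stands in for the regularized elastic-net regression actually used, so the sketch illustrates only the pooling-and-regression structure, not the paper's exact procedure.

```python
import numpy as np

def fit_linear_map(X, Y):
    """Least-squares estimate of M such that y_i ≈ M @ x_i for paired rows
    of X and Y (the paper uses regularized elastic-net regression instead)."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves X @ B ≈ Y
    return B.T

# Toy stand-in for one source domain (e.g. TEMPERATURE) with a shared
# metaphorical mapping CM applied on top of per-adjective literal matrices.
rng = np.random.default_rng(1)
dim, n_per_adj = 20, 150
true_CM = np.eye(dim) + 0.3 * rng.normal(size=(dim, dim))

domain = {}
for adj in ["cold", "icy", "warm"]:
    A_lit = rng.normal(size=(dim, dim))          # placeholder literal matrix
    nouns = rng.normal(size=(n_per_adj, dim))
    met_phrases = nouns @ A_lit.T @ true_CM.T    # p_i = CM @ A_lit @ n_i
    domain[adj] = (A_lit, nouns, met_phrases)

# Step 1 (assumed already done): learn each adjective's literal matrix A_lit.
# Step 2: pool (A_lit(a) @ n_i, p_i) pairs across all adjectives in the
# domain and regress a single mapping CM shared by the whole domain.
lit_preds = np.vstack([nouns @ A_lit.T for A_lit, nouns, _ in domain.values()])
met_targets = np.vstack([p for *_, p in domain.values()])
CM_hat = fit_linear_map(lit_preds, met_targets)
print(np.linalg.norm(CM_hat - true_CM) / np.linalg.norm(true_CM))
```

The key point is the pooling step: metaphorical phrases from every adjective in the source domain contribute to a single mapping, which is what lets the mapping be applied to adjectives whose metaphorical uses were never seen.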
We compare the performance of the transformed LIT matrix CMALIT(a) against the performance of the original LIT matrix ALIT(a) by defining the metaphor transformation improvement (MTI) as: MTI(a) = 1 − P i∈MET err(CM ˆALIT(a)) P i∈MET err(ˆALIT(a)) . Results. Per-adjective MTI was analyzed with a linear mixed-effects model, with a fixed intercept, a random effect of source domain, and random intercepts. Transforming the LIT matrix using the CM mapping matrix improved performance by 11.5% ± 0.023(SE) (p < .001). On average, performance improved for 18 of 19 adjectives and for every source domain (p = .03, binomial test; Fig. 2). Thus, mapping structure is indeed shared across adjectives participating in the same CM. 5.4 Metaphor Classification Method. Once again our results suggest a procedure for metaphor classification. This procedure can classify phrases involving adjectives without seeing any metaphorical annotations. For any unseen phrase i involving an adjective ai, we classify the phrase as metaphorical Figure 2: Reduction in error from transforming LIT matrix using metaphorical mapping. Mean change was positive for every domain (large black), and for all but one adjective (small red). Method F-score Precision Recall Accuracy TRANS-LIT 0.793 0.716 0.819 0.804 MET-LIT 0.838 0.856 0820 0.833 NOUN-NN 0.692 0.732 0.655 0.693 PHRASE-NN 0.575 0.625 0.532 0.587 NOUN-CENT 0.703 0.722 0.685 0.696 PHRASE-CENT 0.610 0.552 0.681 0.542 Table 2: Performance of method of §5.4 (TRANSLIT) against method of §4.4 (MET-LIT) and various baselines. if cos(pi, ˆCM ˆALIT(ai)ni) < cos(pi, ˆALIT(ai)ni). Otherwise, we classify it as literal. We used the same procedure as in §4.2 to learn ˆALIT(ai). Results. Our method achieved an F-score of 0.793 on the classification of phrases involving unseen adjectives. On this same set of phrases, the method of §4.4 achieved an F-score of 0.838. Once again, the performance of our method was superior to the performance of the baselines (Table 2; the MET-LIT figures in Table 2 differ slightly from those in Table 1 because only 19 of 23 adjectives are tested). For comparison, we also include the classification performance using the MET-LIT method of §4.4. While MET-LIT slightly outperforms TRANS-LIT, the latter has the benefit of not needing annotations for metaphorical phrases for the test adjective. Hence, our approach is generalizable to cases where such annotations are unavailable with only slight performance reduction. 6 Discussion Overall, our results show that taking metaphor into account has the potential to improve CDSMs and expand their domain of applicability. The findings of §4 suggest that collapsing across metaphorical and literal uses may hurt accuracy of vector rep189 resentations in CDSMs. While the method in §4 depends on explicit annotations of metaphorical and literal senses, the method in §5 provides a way to generalize these representations to adjectives for which metaphorical training data is unavailable, by showing that metaphorical mappings are transferable across adjectives from the same source domain. Note that an accurate matrix representation of the literal sense of each adjective is still required in the experimental setup of §5. This particular choice of setup allowed a proof of concept of the hypothesis that metaphors function as cross-domain transformations, but in principle it would be desirable to learn transformations from a general BOTH matrix representation for any adjective in a source domain to its MET matrix representation. 
This would enable improved vector representations of metaphorical AN phrases without annotation for unseen adjectives. The success of our models on the metaphor classification tasks demonstrates that there is information about metaphoricity of a phrase inherent in the composition of the meanings of its components. Notably, our results show that this metaphorical compositionality can be captured from corpus-derived distributional statistics. We also noticed some trends at the level of individual phrases. In particular, classification performance and vector accuracy tended to be lower for metaphorical phrases whose nouns are distributionally similar to nouns that tend to participate in literal phrases (e.g., reception is similar to foyer and refreshment in our corpus; warm reception is metaphorical while warm foyer is literal). Another area where classification accuracy is low is in phrases with low corpus occurrence frequency. The ground truth vectors for these phrases exhibit high sample variance and sparsity. Many such phrases sound paradoxical (e.g., bitter sweetness). Our results could also inform debates within cognitive science. First, cognitive scientists debate whether words that are used both literally and figuratively (e.g., long road, long meeting) are best understood as having a single, abstract meaning that varies with context or two distinct but related meanings. For instance, some argue that domains like space, time, and number operate over a shared, generalized magnitude system, yet others maintain that our mental representation of time and number is distinct from our mental representation of space, yet inherited metaphorically from it (Winter et al., 2015). Our results suggest that figurative and literal senses involve quite different patterns of use. This is statistical evidence that adjectives that are used metaphorically have distinct related senses, not a single abstract sense. Second, the Conceptual Metaphor Theory account hypothesizes that LMs are an outgrowth of metaphorical thought, which is in turn an outgrowth of embodied experiences that conflate source and target domains—experience structures thought, and thought structures language (Lakoff, 1993). However, recent critics have argued for the opposite causal direction: Linguistic regularities may drive the mental mapping between source and target domains (Hutchinson and Louwerse, 2013; Casasanto, 2014; Hutchinson and Louwerse, 2014). Our results show that, at least for AN pairs, the semantic structure of a source domain and its mapping to a metaphorical target domain are available in the distributional statistics of language itself. There may be no need, therefore, to invoke embodied experience to explain the prevalence of metaphorical thought in adult language users. A lifetime of experience with literal and metaphorical language may suffice. 7 Conclusion We have shown that modeling metaphor explicitly within a CDSM can improve the resulting vector representations. According to our results, the systematicity of metaphor can be exploited to learn linear transformations that represent the action of metaphorical mappings across many different adjectives in the same semantic domain. Our classification results suggest that the compositional distributional semantics of a phrase can inform classification of the phrase for metaphoricity. Beyond improvements to the applications we presented, the principles underlying our methods also show potential for other tasks. 
For instance, the LIT and MET adjective matrices and the CM mapping matrix learned with our methods could be applied to improve automated paraphrasing of AN phrases. Our work is also directly extendable to other syntactic constructions. In the CDSM framework we apply, verbs would be represented as third-order tensors. Tractable and efficient methods for estimating these verb tensors are now available (Fried et al., 2015). It may also be possible to extend the coverage of our system by using automated word-sense disambiguation to bootstrap annotations and therefore construct LIT 190 and MET matrices in a minimally supervised fashion (Kartsaklis et al., 2013b). Finally, it would be interesting to investigate modeling metaphorical mappings as nonlinear mappings within the deep learning framework. Acknowledgments This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Ekaterina Shutova’s research is supported by the Leverhulme Trust Early Career Fellowship. References Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183–1193. Association for Computational Linguistics. M. Baroni, S. Bernardini, A. Ferraresi, and E. Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226. Marco Baroni, Raffaela Bernardi, and Roberto Zamparelli. 2014. Frege in space: A program of compositional distributional semantics. Linguistic Issues in Language Technology, 9. Dale J. Barr, Roger Levy, Christoph Scheepers, and Harry J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255–278. Steven Bethard, Vicky Tzuyin Lai, and James H. Martin. 2009. Topic model analysis of metaphor frequency for psycholinguistic stimuli. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 9–16. Association for Computational Linguistics. Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 1–4. Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 329–336. BNC Consortium. 2007. British National Corpus, Version 3 BNC XML edition. Gemma Boleda, Eva Maria Vecchi, Miquel Cornudella, and Louise McNally. 2012. First-order vs. higherorder modification in distributional semantics. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1223–1233. Association for Computational Linguistics. Lynne Cameron. 2003. Metaphor in Educational Discourse. A&C Black, London. Daniel Casasanto. 2014. Development of metaphorical thinking: The role of language. In Mike Borkent, Barbara Dancygier, and Jennifer Hinnell, editors, Language and the Creative Mind, pages 3–18. CSLI Publications, Stanford. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. 
In Linguistic Analysis (Lambek Festschrift), pages 345– 384. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. educational and psychosocial measurement. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. General estimation and evaluation of compositional distributional semantic models. In Proceedings of the ACL 2013 Workshop on Continuous Vector Space Models and their Compositionality (CVSC 2013), pages 50–58, East Stroudsburg, Pennsylvania. ACL. Jonathan Dunn. 2013a. Evaluating the premises and results of four metaphor identification systems. In Computational Linguistics and Intelligent Text Processing, pages 471–486. Springer. Jonathan Dunn. 2013b. What metaphor identification systems can tell us about metaphor-in-language. In Proceedings of the First Workshop on Metaphor in NLP, pages 1–10. Katrin Erk and Sebastian Pad´o. 2010. Exemplar-based models for word meaning in context. In Proceedings of the ACL 2010 Conference Short Papers, pages 92–97. Association for Computational Linguistics. Joseph L. Fleiss, Jacob Cohen, and B.S. Everitt. 1969. Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72(5):323. Daniel Fried, Tamara Polajnar, and Stephen Clark. 2015. Low-rank tensors for verbs in compositional distributional semantics. In Proceedings of the 53nd Annual Meeting of the Association for Computational Linguistics, Beijing. Matt Gedigian, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of the Third Workshop on Scalable Natural Language Understanding, pages 41–48, New York. Association for Computational Linguistics. 191 Joseph A. Goguen and D. Fox Harrell. 2005. 7 information visualisation and semiotic morphisms. Studies in Multidisciplinarity, 2:83–97. Joseph A. Goguen and D. Fox Harrell. 2010. Style: A computational and conceptual blending-based approach. In The Structure of Style, pages 291–316. Springer, New York. Joseph Goguen. 1999. An introduction to algebraic semiotics, with application to user interface design. In Computation for metaphors, analogy, and agents, pages 242–291. Springer. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English Gigaword. Linguistic Data Consortium, Philadelphia. Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics, pages 33–37. Association for Computational Linguistics. Ilana Heintz, Ryan Gabbard, Mahesh Srinivasan, David Barner, Donald S Black, Marjorie Freedman, and Ralph Weischedel. 2013. Automatic extraction of linguistic metaphor with lda topic modeling. In Proceedings of the First Workshop on Metaphor in NLP, pages 58–66. Dirk Hovy, Shashank Srivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huiying Li, Whitney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Proceedings of the First Workshop on Metaphor in NLP, pages 52–57. Sterling Hutchinson and Max Louwerse. 2013. Language statistics and individual differences in processing primary metaphors. Cognitive Linguistics, 24(4):667–687. Sterling Hutchinson and Max M. Louwerse. 2014. Language statistics explain the spatial–numerical association of response codes. Psychonomic Bulletin & Review, 21(2):470–478. Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, et al. 2013a. Prior disambiguation of word tensors for constructing sentence vectors. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1590–1601. Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2013b. Separating disambiguation from composition in distributional semantics. In Proceedings of the 2013 Conference on Computational Natural Language Learning, pages 114–123. Saisuresh Krishnakumaran and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational approaches to Figurative Language, pages 13–20. Association for Computational Linguistics. Werner Kuhn and Andrew U Frank. 1991. A formalization of metaphors and image-schemas in user interfaces. In Cognitive and linguistic aspects of geographic space, pages 419–434. Springer. George Lakoff and Mark Johnson. 1981. Metaphors we live by. University of Chicago Press, Chicago. George Lakoff. 1989. Some empirical results about the nature of concepts. Mind & Language, 4(12):103–129. George Lakoff. 1993. The contemporary theory of metaphor. In Andrew Ortony, editor, Metaphor and Thought. Cambridge University Press, Cambridge. Joachim Lambek. 1999. Type grammar revisited. In Logical aspects of computational linguistics, pages 1–27. Springer, Berlin. Linlin Li and Caroline Sporleder. 2009. Classifier combination for contextual idiom detection without labelled data. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 315–323. Association for Computational Linguistics. Linlin Li and Caroline Sporleder. 2010. Using Gaussian mixture models to detect figurative language in context. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 297–300. Association for Computational Linguistics. Linlin Li, Benjamin Roth, and Caroline Sporleder. 2010. Topic models for word sense disambiguation and token-based idiom detection. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1138–1147. Association for Computational Linguistics. Jiming Li, Marco Baroni, and Georgiana Dinu. 2014. Improving the lexical function composition model with pathwise optimized elastic-net regression. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 434–442. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL-08: HLT, pages 236–244. Michael Mohler, David Bracewell, David Hinote, and Marc Tomlinson. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP, pages 27–35. Richard Montague. 1970. English as a formal language. In B Visentini and et al, editors, Linguaggi nella Societ`a e nella Tecnica. Edizioni di Comunit´a, Milan. 192 Yair Neuman, Dan Assaf, Yohai Cohen, Mark Last, Shlomo Argamon, Newton Howard, and Ophir Frieder. 2013. Metaphor identification in large texts corpora. PLoS ONE, 8:e62343. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613–619. ACM. Barbara H. Partee. 1994. Lexical semantics and compositionality. In Lila Gleitman and Mark Liberman, editors, Invitation to Cognitive Science 2nd Edition, Part I: Language. MIT Press, Cambridge, Mass., USA. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. 
Computational Linguistics, 24(1):97– 123. Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1002–1010. Association for Computational Linguistics. Ekatrina Shutova. 2015. Design and evaluation of metaphor processing systems. volume Forthcoming. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics. David I. Spivak. 2014. Category Theory for the Sciences. MIT Press, Cambridge, Mass., USA. Caroline Sporleder and Linlin Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 754–762. Association for Computational Linguistics. Gerard J. Steen, Aletta G. Dorst, J. Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU, volume 14. John Benjamins Publishing, Amsterdam/Philadelphia. Tomek Strzalkowski, George A. Broadwell, Sarah Taylor, Laurie Feldman, Boris Yamrom, Samira Shaikh, Ting Liu, Kit Cho, Umit Boz, Ignacio Cases, and Kyle Elliot. 2013. Robust extraction of metaphors from novel data. In Proceedings of the First Workshop on Metaphor in NLP, pages 67–76, Atlanta, Georgia. Association for Computational Linguistics. Masashi Tsubaki, Kevin Duh, Masashi Shimbo, and Yuji Matsumoto. 2013. Modeling and learning semantic co-compositionality through prototype projections and neural networks. In The 2013 Conference on Empirical Methods in Natural Language Processing, pages 130–140. Yulia Tsvetkov, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using common semantic features. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the 2011 Conference on the Empirical Methods in Natural Language Processing, EMNLP ’11, pages 680–690, Stroudsburg, PA, USA. Association for Computational Linguistics. Peter D. Turney. 2013. Distributional semantics beyond words: supervised learning of analogy and paraphrase. Transactions of the Association for Computational Linguistics (TACL), 1:353–366. Akira Utsumi. 2006. Computational exploration of metaphor comprehension processes. In Proceedings of the 28th Annual Meeting of the Cognitive Science Society (CogSci2006), pages 2281–2286. Bodo Winter, Tyler Marghetis, and Teenie Matlock. 2015. Of magnitudes and metaphors: Explaining cognitive interactions between space, time, and number. Cortex, 64:209–224. Hui Zou and Trevor Hastie. 
2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1913–1923, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Mining Paraphrasal Typed Templates from a Plain Text Corpus Or Biran Columbia University [email protected] Terra Blevins Columbia University [email protected] Kathleen McKeown Columbia University [email protected] Abstract Finding paraphrases in text is an important task with implications for generation, summarization and question answering, among other applications. Of particular interest to those applications is the specific formulation of the task where the paraphrases are templated, which provides an easy way to lexicalize one message in multiple ways by simply plugging in the relevant entities. Previous work has focused on mining paraphrases from parallel and comparable corpora, or mining very short sub-sentence synonyms and paraphrases. In this paper we present an approach which combines distributional and KB-driven methods to allow robust mining of sentence-level paraphrasal templates, utilizing a rich type system for the slots, from a plain text corpus. 1 Introduction One of the main difficulties in Natural Language Generation (NLG) is the surface realization of messages: transforming a message from its internal representation to a natural language phrase, sentence or larger structure expressing it. Often the simplest way to realize messages is though the use of templates. For example, any message about the birth year and place of any person can be expressed with the template “[Person] was born in [Place] in [Year]”. Templates have the advantage that the generation system does not have to deal with the internal syntax and coherence of each template, and can instead focus on document-level discourse coherence and on local coreference issues. On the other hand, templates have two major disadvantages. First, having a human manually compose a template for each possible message is costly, especially when a generation system is relatively openended or is expected to deal with many domains. In addition, a text generated using templates often lacks variation, which means the system’s output will be repetitive, unlike natural text produced by a human. In this paper, we are concerned with a task aimed at solving both problems: automatically mining paraphrasal templates, i.e. groups of templates which share the same slot types and which, if their slots are filled with the same entities, result in paraphrases. We introduce an unsupervised approach to paraphrasal template mining from the text of Wikipedia articles. Most previous work on paraphrase detection focuses either on a corpus of aligned paraphrase candidates or on such candidates extracted from a parallel or comparable corpus. In contrast, we are concerned with a very large dataset of templates extracted from a single corpus, where any two templates are potential paraphrases. Specifically, paraphrasal templates can be extracted from sentences which are not in fact paraphrases; for example, the sentences “The population of Missouri includes more than 1 million African Americans” and “Roughly 185,000 Japanese Americans reside in Hawaii” can produce the templated paraphrases “The population of [american state] includes more than [number] [ethnic group]” and “Roughly [number] [ethnic group] reside in [american state]”. Looking for paraphrases among templates, instead of among sentences, allows us to avoid using an aligned corpus. 
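As a small illustration of what a typed template is and why shared slot types make two templates candidate paraphrases, the sketch below fills the two example templates from the introduction. The bracketed slot syntax, the fill helper and the value dictionary are hypothetical conveniences for the example, not the paper's data format.

```python
import re

# A template is a string with typed slots plus the multiset of its slot
# types; two templates whose type sets match are candidate paraphrases
# (Section 5).
TEMPLATE_A = "The population of [american state] includes more than [number] [ethnic group]"
TEMPLATE_B = "Roughly [number] [ethnic group] reside in [american state]"

SLOT_RE = re.compile(r"\[([^\]]+)\]")

def slot_types(template):
    return sorted(SLOT_RE.findall(template))

def fill(template, values):
    """Fill slots left to right; `values` maps a slot type to a list of fillers."""
    counters = {t: 0 for t in values}
    def repl(match):
        t = match.group(1)
        filler = values[t][counters[t]]
        counters[t] += 1
        return filler
    return SLOT_RE.sub(repl, template)

values = {"american state": ["Hawaii"], "number": ["185,000"],
          "ethnic group": ["Japanese Americans"]}

assert slot_types(TEMPLATE_A) == slot_types(TEMPLATE_B)   # same type set
print(fill(TEMPLATE_B, values))
# -> Roughly 185,000 Japanese Americans reside in Hawaii
```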
Our approach consists of three stages. First, we process the entire corpus and determine slot locations, transforming the sentences to templates (Section 4). Next, we find most approriate type for each slot using a large taxonomy, and group together templates which share the same set of types 1913 as potential paraphrases (Section 5). Finally, we cluster the templates in each group into sets of paraphrasal templates (Section 6). We apply our approach to six corpora representing diverse subject domains, and show through a crowd-sourced evaluation that we can achieve a high precision of over 80% with a reasonable similarity threshold setting. We also show that our threshold parameter directly controls the trade-off between the number of paraphrases found and the precision, which makes it easy to adjust our approach to the needs of various applications. 2 Related Work To our knowledge, although several works exist which utilize paraphrasal templates in some way, the task of extracting them has not been defined as such in the literature. The reason seems to be a difference in priorities. In the context of NLG, Angeli et al. (2010) as well as Kondadadi et al. (2013) used paraphrasal templates extracted from aligned corpora of text and data representations in specific domains, which were grouped by the data types they relate to. Duma and Klein (2013) extract templates from Wikipedia pages aligned with RDF information from DBPedia, and although they do not explicitly mention aligning multiple templates to the same set of RDF templates, the possibility seems to exist in their framework. In contrast, we are interested in extracting paraphrasal templates from non-aligned text for general NLG, as aligned corpora are difficult to obtain for most domains. While template extraction has been a relatively small part of NLG research, it is very prominent in the field of Information Extraction (IE), beginning with Hearst (1992). There, however, the goal is to extract good data and not to extract templates that are good for generation. Many pattern extraction (as it is more commonly referred to in IE) approaches focus on semantic patterns that are not coherent lexically or syntactically, and the idea of paraphrasal templates is not important (Chambers and Jurafsky, 2011). One exception which expicitly contains a paraphrase detection component is (Sekine, 2006). Meanwhile, independently of templates, detecting paraphrases is an important, difficult and wellresearched problem of Natural Language Processing. It has implications for the general study of semantics as well as many specific applications such as Question Answering and Summarization. Research that focuses on mining paraphrases from large text corpora is especially relevant for our work. Typically, these approaches utilize a parallel (Barzilay and McKeown, 2001; Ibrahim et al., 2003; Pang et al., 2003; Quirk et al., 2004; Fujita et al., 2012; Regneri and Wang, 2012) or comparable corpus (Shinyama et al., 2002; Barzilay and Lee, 2003; Sekine, 2005; Shen et al., 2006; Zhao et al., 2009; Wang and Callison-Burch, 2011), and there have been approaches that leverage bilingual aligned corpora as well (Bannard and CallisonBurch, 2005; Madnani et al., 2008). Of the above, two are particularly relevant. Barzilay and Lee (2003) produce slotted lattices that are in some ways similar to templates, and their work can be seen as the most closely related to ours. However, as they rely on a comparable corpus and produce untyped slots, it is not directly comparable. 
In our approach, it is precisely the fact that we use a rich type system that allows us to extract paraphrasal templates from sentences that are not, by themselves, paraphrases and avoid using a comparable corpus. Sekine (2005) produces typed phrase templates, but the approach does not allow learning non-trivial paraphrases (that is, paraphrases that do not share the exact same keywords) from sentences that do not share the same entities (thus remaining dependent on a comparable corpus), and the type system is not very rich. In addition, that approach is limited to learning short paraphrases of relations between two entities. Another line of research is based on contextual similarity (Lin and Pantel, 2001; Pas¸ca and Dienes, 2005; Bhagat and Ravichandran, 2008). Here, shorter (phrase-level) paraphrases are extracted from a single corpus when they appear in a similar lexical (and in later approaches, also syntactic) context. The main drawbacks of these methods are their inability to handle longer paraphrases and their tendency to find phrase pairs that are semantically related but not real paraphrases (e.g. antonyms or taxonomic siblings). More recent work on paraphrase detection has, for the most part, focused on classifying provided sentence pairs as paraphrases or not, using the Microsoft Paraphrase Corpus (Dolan et al., 2004). Mihalcea et al. (2006) evaluated a wide range of lexical and semantic measures of similarity and introduced a combined metric that outperformed all previous measures. Madnani et al. (2012) showed that metrics from Machine Translation can be used 1914 to find paraphrases with high accuracy. Another line of research uses the similarity of texts in a latent space created through matrix factorization (Guo and Diab, 2012; Ji and Eisenstein, 2013). Other approaches that have been explored are explicit alignment models (Das and Smith, 2009), distributional memory tensors (Baroni and Lenci, 2010) and syntax-aware representations of multiword phrases using word embeddings (Socher et al., 2011). Word embeddings were also used by Milajevs et al. (2014). These approaches are not comparable to ours because they focus on classification, as opposed to mining, of paraphrases. Detecting paraphrases is closely related to research on the mathematical representation of sentences and other short texts, which draws on a vast literature on semantics, including but not limited to lexical, distributional and knowledge-based semantics. Of particular interest to us is the work of Blacoe and Lapata (2012), which show that simple combination methods (e.g., vector multiplication) in classic vector space representations outperform more sophisticated alternatives which take into account syntax and which use deep representations (e.g. word embeddings, or the distributional memory approach). This finding is appealing since classic vector space representation (distributional vectors) are easy to obtain and are interpretable, making it possible to drill into errors. 3 Taxonomy Our method relies on a type system which links entities to one another in a taxonomy. We use a combination of WordNet (Fellbaum, 1998) and DBPedia (Auer et al., 2007), which provides both a rich top-level type system with lexicalizations of multiple senses and a large database of entities linked through the type system (the top-level DBPedia categories all have cognates in WordNet, which make the two easy to combine). 
Leveraging the fact that DBPedia entities have corresponding Wikipedia pages, we also use the redirect terms for those pages as alternative lexicalizations of the entity (e.g., the Wikipedia article “United States” has “USA” as a redirect term, among others). 4 Creating Templates The first step to creating the templates is to find entities, which are candidates to becoming slots in the templates. Since we are trying to find sentence-level paraphrasal templates, each sentence in the corpus is a potential template. Entities are found in multiple ways. First, we use regular expressions to find dates, percentages, currencies, counters (e.g., “9th”) and general numbers. Those special cases are immediately given their known type (e.g., “date” or “percentage”). Next, after POS-tagging the entire corpus, we look for candidate entities of the following kinds: terms that contain only NNP (including NNPS) tags; terms that begin and end with an NNP and contain only NNP, TO, IN and DT tags; and terms that contain only capitalized words, regardless of the POS tags. Of these candidates, we only keep ones that appear in the taxonomy. Unlike the special cases above, the type of the slots created from these general entities is not yet known and will be decided in the next step. At the end of this step, we have a set of partiallytyped templates: one made from each sentence in the corpus, with its slots (but not their types in most cases) defined by the location of entities. We remove from this set all templates which have less than two slots as these are not likely to be interesting, and all templates which have more than five slots to avoid excessively complicated templates. We originally experimented with simply accepting any term that appears in the taxonomy as an entity. That method, however, resulted in a large number of both errors and entities that were too general to be useful (e.g, “table”, “world” and similar terms are in the taxonomy). Note that NER approaches, even relatively fine-grained ones, would not give us the same richness of types that directly comparing to the taxonomy allows (and the next step requires that each entity we handle exist in the taxonomy, anyway). 5 Template Typing and Grouping Determining the type of a slot in the template presents two difficulties. First, there is a sense disambiuation problem, as many lexical terms have more than one sense (that is, they can correspond to more than one entry in the taxonomy). Second, even if the sense is known, it is not clear which level of the taxonomy the type should be chosen from. For example, consider the sentence “[JFK] is [New York]’s largest airport” (the terms in square brackets will become slots once their types are determined). “JFK” is ambiguous: it can be an airport, a president, a school, etc. The first 1915 step in this process is, then, to determine which of the possible senses of the term best fits the sentence. But once we determine that the sense of “JFK” here is of an airport, there are different types we can choose. JFK is a New York Airport, which is a type of Airport, which is a type of Air Field, which is a type of Facility and so on. The specificity of the type we choose will determine the correctness of the template, and also which other templates we can consider as potential paraphrases. Our solution is a two-stage distributional approach: choosing the sense, and then choosing the type level that best fit the context of the slot. 
In each stage, we construct a pseudo −sentence (a collection of words in arbitrary, non-grammatical order) from words used in the taxonomy to describe each option (a sense in the first stage, and a type level in the second stage), and then use their vector representations to find the option that best matches the context. Following the observation of Blacoe and Lapata (2012) that simple similarity metrics in traditional vector representations match and even outperform more sophisticated representations in finding relations among short texts as long as multiplication is used in forming vector representations for the texts, we use traditional context vectors as the basis of our comparisons in both stages. We collect context vectors from the entire English Wikipedia corpus, with a token window of 5. To avoid noise from rarely occuring words and reduce the size of the vectors, we remove any feature with a count below a threshold of log10(Σ) where Σ is the sum of all feature counts in the vector. Finally, the vector features are weighted with (normalized) TF*IDF.1 For a multi-word collection (e.g. a pseudosentence) ψ, we define the features of the combined vector Vψ using the vectors of member words Vw as: Vjψ = ( Y w∈ψ Vjw) 1 |S| (1) Where Vjw is the value of the jth feature of Vw. To choose the sense of the slot (the first stage), we start with S, the set of all possible senses (in the taxonomy) for the entity in the slot. We create a pseudo-sentence ψs from the primary lexi1A “term” being a single feature count, and a “document” being a vector calizations of all types in the hierarchy above each sense s - e.g., for the airport sense of JFK we create a single pseudo-sentence ψJFK−airport−sense consisting of the terms “New York airport”, “airport”, “air field”, “facility” and so on.2 We create a vector representation Vψs for each ψs using Equation 1. Then, we create a pseudo-sentence ψcontext for the context of the slot, composed of the words in a 5-word window to the left and right of the slot in the original sentence, and create the vector Vψcontext. We choose the sense ˆs with the highest cosine similarity to the contex: ˆs = arg max s∈S cos(Vψs, Vψcontext) Note that this is a deep similarity - the similarity of the (corpus) context of the sense and the (corpus) context of the slot context; the words in the sentence themselves are not used directly. We use the lexicalizations of all types in the hierarchy to achieve a more robust vector representation that has higher values for features that cooccur with many levels in the sense’s hierarchy. For example, we can imagine that “airplane” will co-occur with many of the types for the JFK airport sense, but “La Guardia” will not (helping to lower the score of the first, too-specific sense of “New York airport”) and neither will features that co-occur with other senses of a particular type e.g., “Apple” for the “airport” type.3 Once the sense is chosen, we choose the proper type level to use (the second stage). Here we create a pseudo-sentence for each type level separately, composed of all possible lexicalizations for the type. For example, the “air field” type contains the lexicalizations “air field”, “landing field”, and “flying field”. These pseudo-sentences are then compared to the context in the same way as above, and the one with highest similarity is chosen. 
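A minimal sketch of the two selection stages follows. It assumes a hypothetical helper `vec(word)` that returns the word's TF*IDF-weighted context vector, and it reads the exponent of Equation (1) as 1/|ψ|, i.e. a component-wise geometric mean over the words of the pseudo-sentence.

```python
import numpy as np

def combine(word_vectors):
    """Equation (1): multiply the member vectors feature by feature and take the
    |psi|-th root (a component-wise geometric mean). With sparse context vectors
    many products are zero, so real code may need smoothing or dense features."""
    stacked = np.vstack(word_vectors)
    return np.prod(stacked, axis=0) ** (1.0 / len(word_vectors))

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def best_option(options, context_words, vec):
    """Shared by both stages: `options` maps a candidate (a sense in stage 1, a
    type level in stage 2) to the words of its pseudo-sentence; the candidate
    whose combined vector is closest to the slot's 5-word context window wins."""
    v_context = combine([vec(w) for w in context_words])
    scores = {name: cosine(combine([vec(w) for w in words]), v_context)
              for name, words in options.items()}
    return max(scores, key=scores.get)
```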
The reason for using all lexicalizations is similar to the one for using all types when determining the sense: to create a more robust representation that down-scores arbitrary co-occurences. At the end of this step, the templates are fully typed. Before continuing to the next step of finding paraphrases, we group all potential paraphrases together. Potential paraphrases are simply 2But we exclude a fixed, small set of the most abstract types from the first few levels of the WordNet hierarchy, as these turn out to never be useful 3AirPort is the name of an Apple product 1916 groups of templates which share exactly the same set of slot types (regardless of ordering). 6 Finding Paraphrases within Groups Each group of potential paraphrases may contain multiple sub-groups such that each of the members of the subgroup is a paraphrase of all the others. In this last stage, we use a clustering algorithm to find these sub-groups. We define the distance between any two templates in a group as the Euclidean distance between the vectors (created using Equation 1) of the two templates with the entity slots removed (that is, the pseudo-sentences created with all words in the template outside of the slots). We tried other distance metrics as well (for example, averaging the distances between the contexts surrounding each pair of corresponding slots in both templates) but the Euclidean distance seemed to work best. Using this metric, we apply single-linkage agglomerative clustering, with the stopping criteria defined as a threshold τ for the maximum sum of squared errors (SSE) within any cluster. Specifically, the algorithm stops linking if the cluster C that would be created by the next link satisfies: log( C X v d(v, µC)2) ≥τ Where µC is the centroid of C and d is the Euclidean distance. The logarithm is added for convenience, since the SSE can get quite large and we want to keep τ on a smaller scale. The intuition behind this algorithm is that some paraphrases will be very similar (lexically or on a deeper level) and easy to find, while some will be more difficult to distinguish from template pairs that are related but not paraphrasal. The singlelinkage approach is essentially transductive, allowing the most obvious clusters to emerge first and avoiding the creation of a central model that will become less precise over time. The threshold is a direct mechanism for controlling the trade-off between precision and recall. At the end of this step, any pair of templates within the same cluster is considered a paraphrase. Clusters that contain only a single template are discarded (in groups that have high distances among their member templates, often the entire group is discarded since even a single link violates the threshold). 7 Evaluation To evaluate our method, we applied it to the six domains described in Table 1. We tried to choose a set of domains that are diverse in topic, size and degree of repeated structure across documents. For each domain, we collected a corpus composed of relevant Wikipedia articles (as described in the table) and used the method described in Sections 4-6 to extract paraphrasal templates. We used Wikipedia for convenience, since it allows us to easily select domain corpora, but there is nothing in our approach that is specific to Wikipedia; it can be applied to any text corpus. We sampled 400 pairs of paraphrases extracted from each domain and used this set of 2400 pairs to conduct a crowd-sourced human evaluation on CrowdFlower. 
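To make the clustering step of Section 6 concrete, here is a minimal, deliberately naive sketch. It assumes each template has already been mapped to a vector by applying Equation (1) to its non-slot words; the nested loops are for illustration only and would not scale to large groups.

```python
import numpy as np

def log_sse(vectors):
    """log of the within-cluster sum of squared Euclidean distances to the centroid."""
    X = np.vstack(vectors)
    sse = float(((X - X.mean(axis=0)) ** 2).sum())
    return np.log(sse) if sse > 0 else float("-inf")

def single_linkage(template_vectors, tau):
    """Single-linkage agglomerative clustering over template vectors. Linking
    stops as soon as the cluster the next (closest) link would create
    satisfies log(SSE) >= tau."""
    clusters = [[i] for i in range(len(template_vectors))]
    while len(clusters) > 1:
        # single linkage: cluster distance = distance of the closest member pair
        _, a, b = min(
            ((min(np.linalg.norm(template_vectors[i] - template_vectors[j])
                  for i in clusters[a] for j in clusters[b]), a, b)
             for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda t: t[0],
        )
        merged = clusters[a] + clusters[b]
        if log_sse([template_vectors[i] for i in merged]) >= tau:
            break
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
    # clusters containing a single template are discarded, as described above
    return [c for c in clusters if len(c) > 1]
```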
For each template pair, we randomly selected one and used its original entities in both templates to create two sentences about the same set of entities. The annotators were presented with this pair and asked to score the extent to which they are paraphrases on a scale from 1 to 5. Table 2 shows the labels and a brief version of the explanations provided for each. To ensure the quality of annotations, we used a set of hidden test questions throughout the evaluation and rejected the contributions of annotators which did not get at least 70% of the test questions correctly. Of those that did perform well on the test questions, we had three annotators score each pair and used the average as the final score for the pair. In 39.4% of the cases, all three annotators agreed; two annotators agreed in another 47% of the cases, and in the remaining 13.6% there was complete disagreement. The inter-annotator agreement for the two annotators that had the highest overlap (27 annotated pairs), using Cohen’s Kappa, was κ = 0.35. The overall results are shown in Figure 1. Note that because of our clustering approach, we have a choice of similarity threshold. The results are shown across a range of thresholds from 8 to 11 - it is clear from the figure that the threshold provides a way to control the trade-off between the number of paraphrases generated and their precision. Table 3 shows the results with our preferred threshold of 9.5. The number of paraphrase clusters found changes with the threshold. For the 9.5 threshold we find 512 clusters over all domains, a little over 60% of the number of paraphrases. The distribution of their sizes is Zipfian: a few very large clus1917 Domain Description Size Source article link NBA NBA teams 30 National_Basketball_Association States US states 50 N/A AuMa Automobile manufacturers 241 List_of_automobile_manufacturers Metal Heavy Metal bands (original movement, 1967-1981) 291 List_of_heavy_metal_bands CWB Battles of the American Civil War 446 List_of_American_Civil_War_battles Marvel Superheroes from the Marvel Comics universe 932 Category:Marvel_Comics_superheroes Table 1: Evaluation domains. Source article links are preceded by https://en.wikipedia.org/wiki/ Score Label Explanation 5 Perfect Paraphrase The two sentences are equivalent in meaning (but allow differences in e.g. tense, wordiness or sentiment) 4 Almost Paraphrase The two sentences are equivalent in meaning with one minor difference (e.g., change or remove one word) 3 Somewhat Paraphrase The two sentences are equivalent in meaning with a few minor differences, or are complex sentences with a part that is a paraphrase and a part that is not 2 Related The sentences are related in meaning, but are not paraphrases 1 Unrelated The meanings of the sentences are unrelated Table 2: Annotation score labels and explanations Figure 1: The average scores for each domain, for a range of threshold choices. The number in parentheses for each threshold is the number of paraphrases generated Domain # paraphrases Avg. %3+ %4+ NBA 30 4.1 88% 70% States 171 4.1 86% 76% AuMa 58 3.5 80% 50% Metal 98 3.7 82% 63% CWB 81 3.6 75% 56% Marvel 428 3.7 83% 63% Table 3: Size, average score, % of pairs with a score above 3 (paraphrases), and % of pairs with a score above 4 (high quality paraphrases) for the different domains with a 9.5 threshold ters, dozens of increasingly smaller medium-sized ones and a long tail of clusters that contain only two templates. 
The vast majority of paraphrase pairs come from sentences that were not originally paraphrases (i.e, sentences that originally had different entities). With a 9.5 threshold, 86% of paraphrases answer that criteria. While that number varies somewhat across thresholds, it is always above 80% and does not consistently increase or decrease as the threshold increases. 1918 Corpus type Prec. PPS This paper, τ = 8 Unaligned 94% 0.005 This paper, τ = 9.5 Unaligned 82% 0.013 This paper, τ = 11 Unaligned 65% 0.1 Barzilay and McKeown (2001) Parallel 86.5% 0.1 * Ibrahim et al. (2003) Parallel 41.2% 0.11 * Pang et al. (2003) Parallel 81.5% 0.33 Barzilay and Lee (2003) Comparable 78.5% 0.07 Bannard and Callison-Burch (2005) Parallel bilingual 61.9% n/a ** Zhao et al. (2009) Parallel or Comparable 70.6% n/a ** Wang and Callison-Burch (2011) Comparable 67% 0.01 Fujita et al. (2012) Parallel bilingual + unaligned 58% 0.34 Regneri and Wang (2012) Parallel 79% 0.17 * These papers do not report the number of sentences in the corpus, but do report enough for us to estimate it (e.g. the number of documents or the size in MB) **These papers do not report the number of paraphrases extracted, or such a number does not exist in their approach Table 4: Comparison with the precision and paraphrases generated per input sentence (PPS) of relevant prior work While we wanted to show a meaningful comparison with another method from previous work, none of them do what we are doing here - extraction of sentence-size paraphrasal templates from a non-aligned corpus - and so a comparison using the same data would not be fair (and in most cases, not possible). While it seems that providing the results of human evaluation without comparison to prior methods is the norm in most relevant prior work (Ibrahim et al., 2003; Pas¸ca and Dienes, 2005; Bannard and Callison-Burch, 2005; Fujita et al., 2012), we wanted to at least get some sense of where we stand in comparison to other methods, and so we provide a list of (not directly comparable) results reported by other authors in Table 4.4 While it is impossible to meaningfully compare and rate such different methods, these numbers support the conclusion that our singlecorpus, domain-agnostic approach achieves a precision that is similar to or better than other methods. We also include the paraphrase per sentence (PPS) value - the ratio of paraphrases extracted to the number of input sentences of the corpus - for each method in the table. We intend this figure as the closest thing to recall that we can conceive 4We always show the results of the best system described. Where needed, if results were reported in a different way than simple percentages, we use averages and other appropriate measures. Some previous work defines related sentences (as opposed to paraphrases) as positives and some does not; we do not change their numbers to fit a single definition, but we use the harsher measure for our own results for mining paraphrases. However, keep in mind that it is not a comparable figure across the methods, since different corpora are used. In particular, it is expected to be significantly higher for parallel corpora, where the entire corpus consists of potential paraphrases (and that fact is reflected in Table 4, where some methods that use parallel corpora have a PPS that is an order of magnitude higher than other methods). 
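Written out, the PPS figure reported in Table 4 is simply:

```latex
\mathrm{PPS} \;=\; \frac{\text{number of paraphrase pairs extracted}}{\text{number of sentences in the input corpus}}
```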
8 Discussion and Examples The first thing to note about the results shown in Figure 1 is that even for the highest threshold considered, which gives us a ×21 improvement in size over the smallest threshold considered, all domains except CWB achieve an average score higher than 3, meaning most of the pairs extracted are paraphrases (CWB is close - a little over 2.9 on average). For the lowest threshold considered, all domains are at a precision above 88%, and for three of them it is 100%. In general, across all domains, there seems to be a significant drop in precision (and a significant boost in size) for thresholds between 9 and 10, while the precisions and sizes are fairly stable for thresholds between 8 and 9 and between 10 and 11. This result is encouraging: since the method seems to behave fairly similarly for different domains with regard to changes in the threshold, we should be able to expect similar behavior for new domains as the threshold is 1919 adjusted. The magnitude of precision across domains is another matter. It is clear from the results that some domains are more difficult than others. The Metal domain seems to be the hardest: it never achieves an average score higher than 3.8. For the highest threshold, however, Metal is not different from most of the others, while CWB is significantly lower in precision. The reason seems to be the styles of the domain articles: some domains tend to have a more structured form. For example, each article in the States domain will discuss the economy, demographics, formation etc. of the state, and we are more likely to find paraphrases there (simply by virtue of there being 50×49 candidates). Articles in the Metal domain are much less structured, and there are fewer obvious paraphrase candidates. In CWB articles, there are a few repetitive themes: the outcome of the battle, the casualties, the generals involved etc., but beyond that it is fairly unstructured. This “structurality” of the domain also affects the number of paraphrases that can be found, as evident from the number of paraphrases found in the states domain in Table 3 as compared with the (much larger) Metal and CWB domains. Table 5 shows a number of examples from each domain, along with the score given to each by the annotators. In an informal error analysis, we saw a few scenarios recurring in low-scored pairs. The Metal example at the bottom of Table 5 is a double case of bad sense disambiguation: the album in the second sentence (“Pyromania” in the original) happened to have a name that is also a pathological state. In addition, the number in the second sentence really was a date (“1980”). If we had correctly assigned the senses, these two templates would not be paraphrase candidates. The process of grouping by type is an important part of improving precision: two sentences can be misleadingly similar in the vector space, but it is less likely to have two sentences with the exact same entity types and a high vector similarity that are not close in meaning. Another scenario is the one seen in the NBA example that was scored as 1. Here the senses were chosen correctly, but the level of the hierarchy chosen for the person slot was too high. If instead we had chosen basketball coach and basketball player for the two sentences respectively, they would not be considered as paraphrase candidates (and note that both meanings are implied by the templates). 
This sort of error does not create a problem (in our evaluation, at least) if the more accurate sense is the same in both sentences - for example, in the other NBA example (which scored 4), the place slot could be more accurately replaced with sports arena in both templates. Cases where the types are chosen correctly do not always result in perfect paraphrases, but are typically at least related (e.g. in the examples that scored 2, and to a lesser extent those that scored 3). That scenario can be controlled using a lower threshold, with the downside that the number of paraphrases found decreases. 9 Conclusion and Future Work We presented a method for extracting paraphrasal templates from a plain text corpus in three steps: templatizing the sentences of the corpus; finding the most appropriate type for each slot; and clustering groups of templates that share the same set of types into paraphrasal sub-groups. We conducted a crowd-sourced human evaluation and showed that our method performs similarly to or better than prior work on mining paraphrases, with three major improvements. First, we do not rely on a parallel or comparable corpus, which are not as easily obtained; second, we produce typed templates that utilize a rich, fine-grained type system, which can make them more suitable for generation; and third, by using such a type system we are able to find paraphrases from sentence pairs that are not, before templatization, really paraphrases. Many, if not most, of the worst misidentifications seem to be the result of errors in the second stage of the approach - disambiguating the sense and specificity of the slot types. In this paper we focused on a traditional distributional approach that has the advantage of being explainable, but it would be interesting and useful to explore other options such as word embeddings, matrix factorization and semantic similarity metrics. We leave these to future work. Another task for future work is semantic alignment. Our approach discovers paraphrasal templates without aligning them to a semantic meaning representation; while these are perfectly usable by summarization, question answering, and other text-to-text generation applications, it would be useful for concept-to-text generation and other applications to have each cluster of templates aligned 1920 Score Domain Templates 5 States Per dollar of federal tax collected in [date 1], [american state 1] citizens received approximately [money 1] in the way of federal spending. In [date) 1] the federal government spent [money 1] on [american state 1] for every dollar of tax revenue collected from the state. AuMa Designed as a competitor to the [car 1], [car 2] and [car 3]. It is expected to rival the [car 1], [car 2], and [car 3]. 4 CWB Federal casualties were heavy with at least [number 1] killed or mortally wounded, [number 2] wounded , and [number 3] made prisoner. Federal losses were [number 1] killed, [number 2] wounded, and [number 3] unaccounted for – primarily prisoners. NBA For the [date 1] season, the [basketball team 1] moved into their new arena , the [place 1], with a seating capacity of [number 1]. As a result of their success on the court, the [basketball team 1] moved into the [place 1] in [date 1], which seats over [number 1] fans. 3 Marvel [imaginary being 1] approached [imaginary being 2], hunting for leads about the whereabouts of the X-Men. [imaginary being 1] and [imaginary being 2] eventually found the X-Men and became full time members. 
Metal In [date 1], [band) 1] recorded their third studio album, “[album 1]”, which was produced by Kornelije Kovaˇc. [band 1] released their next full-length studio album, “[album 1]” in [date 1]. 2 Auma [company 1] and its subsidiaries created a variety of initiatives in the social sphere, initially in [country 1] and then internationally as the company expanded. [company 1] participated in [country 1]’s unprecedented economic growth of the 1950s and 1960s. Marvel Using her powers of psychological deduction, she picked up on [first name 1]’s attraction towards her, and then [first name 2] admits she is attracted to him as well. While [first name 1] became shy, reserved and bookish, [first name 2] became athletically inclined, aggressive, and arrogant. 1 NBA Though the [date 1] 76ers exceeded many on-court expectations, there was a great deal of behind-thescenes tension between [person 1], his players, and the front office. After an [date 1] start, with [person 1] already hurt, these critics seemed to have been proven right. Metal Within [number 1] hours of the statement, he died of bronchial pneumonia, which was brought on as a complication of [pathological state 1]. With the album’s massive success, “[pathological state 1]” was the catalyst for the [number 1] pop-metal movement. Table 5: Examples of template pairs and their scores to a semantic representation of the meaning expressed. Since we already discover all the entity types involved, all that is missing is the proposition (or frame, or set of propositions); this seems to be a straightforward, though not necessarily easy, task to tackle in the near future. References Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 502–512, Stroudsburg, PA, USA. Association for Computational Linguistics. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: a nucleus for a web of open data. In Proceedings of the 6th international The semantic web and 2nd Asian conference on Asian semantic web conference, ISWC’07/ASWC’07, pages 722– 735, Berlin, Heidelberg. Springer-Verlag. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 597– 604, Stroudsburg, PA, USA. Association for Computational Linguistics. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpusbased semantics. Comput. Linguist., 36(4):673–721, December. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 16–23, Stroudsburg, PA, USA. Association for Computational Linguistics. Regina Barzilay and Kathleen R. McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL ’01, pages 50–57, Stroudsburg, PA, USA. Association for Computational Linguistics. 1921 Rahul Bhagat and Deepak Ravichandran. 2008. Large Scale Acquisition of Paraphrases for Learning Surface Patterns. In Proceedings of ACL-08: HLT, pages 674–682, Columbus, Ohio, June. 
Association for Computational Linguistics. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 546– 556, Stroudsburg, PA, USA. Association for Computational Linguistics. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 976–986, Stroudsburg, PA, USA. Association for Computational Linguistics. Dipanjan Das and Noah A. Smith. 2009. Paraphrase identification as probabilistic quasi-synchronous recognition. In Proc. of ACL-IJCNLP. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of Coling 2004, pages 350–356, Geneva, Switzerland, Aug 23–Aug 27. COLING. Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers, pages 83–94. ASSOC COMPUTATIONAL LINGUISTICS-ACL. Christiane Fellbaum, editor. 1998. WordNet An Electronic Lexical Database. The MIT Press. Atsushi Fujita, Pierre Isabelle, and Roland Kuhn. 2012. Enlarging paraphrase collections through generalization and instantiation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 631–642, Stroudsburg, PA, USA. Association for Computational Linguistics. Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 864–872, Stroudsburg, PA, USA. Association for Computational Linguistics. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguisticsVolume 2, pages 539–545. Association for Computational Linguistics. Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extracting structural paraphrases from aligned monolingual corpora. In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, PARAPHRASE ’03, pages 57–64, Stroudsburg, PA, USA. Association for Computational Linguistics. Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative improvements to distributional sentence similarity. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 891–896. ACL. Ravi Kondadadi, Blake Howald, and Frank Schilder. 2013. A statistical nlg framework for aggregated planning and realization. In ACL (1), pages 1406– 1415. The Association for Computer Linguistics. Dekang Lin and Patrick Pantel. 2001. Dirt @sbt@discovery of inference rules from text. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’01, pages 323–328, New York, NY, USA. ACM. N. Madnani, Philip Resnik, Bonnie J Dorr, and R. Schwartz. 2008. Applying automatically generated semantic knowledge: A case study in machine translation. 
NSF Symposium on Semantic Knowledge Discovery, Organization and Use. Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 182–190, Stroudsburg, PA, USA. Association for Computational Linguistics. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1, AAAI’06, pages 775–780. AAAI Press. D. Milajevs, D. Kartsaklis, M. Sadrzadeh, and M. Purver. 2014. Evaluating neural word representations in tensor-based compositional settings. In Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar. Association for Computational Linguistics, Association for Computational Linguistics. Marius Pas¸ca and P´eter Dienes. 2005. Aligning needles in a haystack: Paraphrase acquisition across the web. In Proceedings of the Second International Joint Conference on Natural Language Processing, IJCNLP’05, pages 119–130, Berlin, Heidelberg. Springer-Verlag. 1922 Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 102– 109, Stroudsburg, PA, USA. Association for Computational Linguistics. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 142–149. Michaela Regneri and Rui Wang. 2012. Using discourse information for paraphrase extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 916–927, Stroudsburg, PA, USA. Association for Computational Linguistics. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between ne pairs. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), pages 80–87. Satoshi Sekine. 2006. On-demand information extraction. In Proceedings of the COLING/ACL on Main Conference Poster Sessions, COLING-ACL ’06, pages 731–738, Stroudsburg, PA, USA. Association for Computational Linguistics. Siwei Shen, Dragomir R. Radev, Agam Patel, and G¨unes¸ Erkan. 2006. Adding syntax to dynamic programming for aligning comparable texts for the generation of paraphrases. In Proceedings of the COLING/ACL on Main Conference Poster Sessions, COLING-ACL ’06, pages 747–754, Stroudsburg, PA, USA. Association for Computational Linguistics. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 313–318, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 151–161, Stroudsburg, PA, USA. Association for Computational Linguistics. Rui Wang and Chris Callison-Burch. 2011. Paraphrase fragment extraction from monolingual comparable corpora. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, BUCC ’11, pages 52– 60, Stroudsburg, PA, USA. Association for Computational Linguistics. Shiqi Zhao, Xiang Lan, Ting Liu, and Sheng Li. 2009. Application-driven statistical paraphrase generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, ACL ’09, pages 834–842, Stroudsburg, PA, USA. Association for Computational Linguistics. 1923
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1924–1934, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics How to Train Dependency Parsers with Inexact Search for Joint Sentence Boundary Detection and Parsing of Entire Documents Anders Bj¨orkelund and Agnieszka Fale´nska and Wolfgang Seeker and Jonas Kuhn Institut f¨ur Maschinelle Sprachverarbeitung University of Stuttgart {anders,falensaa,seeker,jonas}@ims.uni-stuttgart.de Abstract We cast sentence boundary detection and syntactic parsing as a joint problem, so an entire text document forms a training instance for transition-based dependency parsing. When trained with an early update or max-violation strategy for inexact search, we observe that only a tiny part of these very long training instances is ever exploited. We demonstrate this effect by extending the ArcStandard transition system with swap for the joint prediction task. When we use an alternative update strategy, our models are considerably better on both tasks and train in substantially less time compared to models trained with early update/max-violation. A comparison between a standard pipeline and our joint model furthermore empirically shows the usefulness of syntactic information on the task of sentence boundary detection. 1 Introduction Although punctuation mostly provides reliable cues for segmenting longer texts into sentence units, human readers are able to exploit their understanding of the syntactic and semantic structure to (re-)segment input in the absence of such cues. When working with carefully copy-edited text documents, sentence boundary detection can be viewed as a minor preprocessing task in Natural Language Processing, solvable with very high accuracy. However, when dealing with the output of automatic speech recognition or “noisier” texts such as blogs and emails, non-trivial sentence segmentation issues do occur. Dridan and Oepen (2013), for example, show how much impact fully automatic preprocessing can have on parsing quality for well-edited and less-edited text. Two possible strategies to approach this problem are (i) to exploit other cues for sentence boundaries, such as prosodic phrasing and intonation in speech (e.g., Kol´aˇr et al. (2006)) or formatting cues in text documents (Read et al., 2012), and (ii) to emulate the human ability to exploit syntactic competence for segmentation. We focus here on the latter, which has received little attention, and propose to cast sentence boundary detection and syntactic (dependency) parsing as a joint problem, such that segmentations that would give rise to suboptimal syntactic structures can be discarded early on. A joint model for parsing and sentence boundary detection by definition operates on documents rather than single sentences, as is the standard case for parsing. The task is illustrated in Figure 1, which shows the beginning of a document in the Switchboard corpus, a collection of transcribed telephone dialogues. The parser must predict the syntactic structure of the three sentences as well as the start points of each sentence.1 The simple fact that documents are considerably longer than sentences, often by orders of magnitude, creates some interesting challenges for a joint system. First of all, the decoder needs to handle long inputs efficiently. This problem is easily solved by using transition-based decoders, which excel in this kind of setting due to their incremental approach and their low theoretical complexity. 
Specifically, we use a transition-based decoder that extends the Swap transition system of Nivre (2009) in order to introduce sentence boundaries during the parsing process. The parser performs inexact search for the optimal structure by 1We chose this example for its brevity. For this particular example, the task of sentence boundary prediction could be solved easily with speaker information since the second sentence is from another speaker’s turn. The interesting cases involve sentence segmentation within syntactically complex turns of a single speaker. 1924 you said you have four cats i have four cats how old are they ... nsubj nsubj ccomp num dobj nsubj num dobj advmod dep nsubj Figure 1: The beginning of a sample document from the Switchboard corpus. Tokens that start a sentence are underlined. The task is to predict syntactic structure and sentence boundaries jointly. maintaining a beam of several candidate derivations throughout the parsing process. We will show in this paper that, besides efficient decoding, a second, equally significant challenge lies in the way such a parser is trained. Normally, beam-search transition-based parsers are trained with structured perceptrons using either early update (Zhang and Clark, 2008; Collins and Roark, 2004) or max-violation updates (Huang et al., 2012). Yet our analysis demonstrates that neither of these update strategies is appropriate for training on very long input sequences as they discard a large portion of the training data.2 A significant part of the training data is therefore never used to train the model. As a remedy to this problem, we instead use an adaptation of the update strategy in Bj¨orkelund and Kuhn (2014). They apply early update in a coreference resolution system and observe that the task is inherently so difficult that the correct item practically never stays in the beam. So early updates are unable to exploit the full instances during training. They propose to apply the updates iteratively on the same document until the full document has been observed. In our case, i.e. when parsing entire documents, the problem is similar in that early updates do not reach the point where the learning algorithm exploits the full training data within reasonable time. Training instead with the iterative update strategy gives us significantly better models in substantially less training time. The second contribution in this paper is to demonstrate empirically that syntactic information can make up to a large extent for missing or unreliable cues from punctuation. The joint system implements this hypothesis and allows us to test the influence of syntactic information on the pre2We make one simplifying assumption in our experimental setup by assuming gold tokenization. Tokenization is often taken for granted, mostly because it is a fairly easy task in English. For a realistic setting, tokenization would have to be predicted as well, but since we are interested in the effect of long sequences on training, we do not complicate our setting by including tokenization. diction of sentence boundaries as compared to a pipeline baseline where both tasks are performed independently of each other. For our analysis, we use the Wall Street Journal as the standard benchmark set and as a representative for copyedited text. 
We also use the Switchboard corpus of transcribed dialogues as a representative for data where punctuation cannot give clues to a sentence boundary predictor (other types of data that may show this property to varying degrees are web content data, e.g. forum posts or chat protocols, or (especially historical) manuscripts). While the Switchboard corpus gives us a realistic scenario for a setting with unreliable punctuation, the syntactic complexity of telephone conversations is rather low compared to the Wall Street Journal. Therefore, as a controlled experiment for assessing how far syntactic competence alone can take us if we stop trusting punctuation and capitalization entirely, we perform joint sentence boundary detection/parsing on a lower-cased, nopunctuation version of the Wall Street Journal. In this setting, where the parser must rely on syntactic information alone to predict sentence boundaries, syntactic information makes a difference of 10 percentage point absolute for the sentence boundary detection task, and two points for labeled parsing accuracy. 2 Transition system We start from the ArcStandard system extended with a swap transition to handle non-projective arcs (Nivre, 2009). We add a transition SB (for sentence boundary) that flags the front of the buffer as the beginning of a new sentence. SB blocks the SHIFT transition until the stack has been reduced and a tree has been constructed, which prevents the system from introducing arcs between separate sentences. To track the predicted sentence boundaries, we augment the configurations with a set S to hold the predicted sentence boundaries. Conceptually this leads to a representation where a document has a single artificial root 1925 Transition Preconditions LEFTARC (σ|s1|s0, β, A, S) ⇒ (σ|s0, β, A ∪{s0 →s1}, S) s1 ̸= 0 RIGHTARC (σ|s1|s0, β, A, S) ⇒ (σ|s1, β, A ∪{s1 →s0}, S) SHIFT (σ, b0|β, A, S) ⇒ (σ|b0, β, A, S) b0 ̸= LAST(S) ∨|σ| = 1 ∨SWAPPED(β) SWAP (σ|s1|s0, β, A, S) ⇒ (σ|s0, s1|β, A, S) s1 < s0 SB (σ, b0|β, A, S) ⇒ (σ, b0|β, A, S ∪{b0}) LAST(S) < b0 ∧¬SWAPPED(β) Figure 2: Transition system. σ|s1|s0 denotes the stack with s0 and s1 on top, b0|β denotes the buffer with b0 in front. LAST(S) denotes the most recent sentence boundary, and SWAPPED(β) is true iff the buffer contains swapped items. node that replaces the artificial root nodes for individual sentences. The transition types of the system are shown in Figure 2. The configurations consist of four data structures: the stack σ, the input buffer β, the set of constructed arcs A, and the set of sentence boundaries S. LEFTARC, RIGHTARC, and SWAP have the same semantics and preconditions as in Nivre (2009). We modify the preconditions of SHIFT in order to block shifts when necessary. Whether shift is allowed can be categorized into three cases subject to the most recently predicted sentence boundary: • If LAST(S) < b0: The last predicted sentence boundary has already been shifted onto the stack. At this point, the system is building a new sentence and has not yet decided where it ends. SHIFT is therefore allowed. • If LAST(S) > b0: This situation can only occur if the system predicted a sentence boundary and subsequently made a SWAP. LAST(S) then denotes the end of the current sentence and is deeper in the buffer than b0. Thus SHIFT is allowed since b0 belongs to the current sentence. • If LAST(S) = b0: The system must complete the current sentence by reducing the stack before it can continue shifting and SHIFT is generally not allowed, with two exceptions. 
(1) If the stack consists only of the root (i.e., |σ| = 1), the current sentence has been completed and the system is ready to begin parsing the next one. (2) If b0 denotes the beginning of a new sentence, but it has been swapped back onto the buffer, then it belongs to the same sentence as the tokens currently on the stack. The preconditions for SB are straightforward. It is only allowed if the current b0 is ahead of the most recently predicted sentence boundary. Additionally, the transition is not allowed if b0 has been swapped out from the stack. If it were, then b0 would be part of the following sentence and sentences would no longer be continuous. Extending the transition system to also handle sentence boundaries does not affect the computational complexity. While the swap transition system has worst case O(n2) complexity, Nivre (2009) shows that swaps are rare enough that the system maintains a linear time complexity on average. A naive implementation of the configurations that make the arc set A and the sentence boundary set S explicit could result in configurations that require linear time for copying during beam search. Goldberg et al. (2013) show how this problem can be circumvented in the case of a sentence-based parser. Instead of making the arc set explicit, the arcs are reconstructed after parsing by following back-pointers to previous states. Only a small set of arcs required for feature extraction are saved in the states. We note that the same trick can be applied to avoid keeping an explicit representation of S since the system only needs to know the last predicted sentence boundary. Snt 1 sh sh la sh sh la sh sh sbearly la ra ra ra sblate Snt 2 sh sh la sh sh sbearly la ra ra sblate Snt 3 sh sh la sh la sh sbearly ra ra sblate Table 1: Transition sequences including sentence boundary transitions for the example in Figure 1. Oracle. Since each sentence constitutes its own subtree under the root node, a regular sentencebased oracle can be used to derive the oracle transition sequence for complete documents. Specifically, we apply the sentence-based oracle to each sentence, and then join the sequences with a SB transition in between. In the oracle sequences derived this way we can either apply the SB transition as late as possible, requiring the system to completely reduce the stack before introducing a sentence boundary. Alternatively, sentence boundaries can be introduced as early as possible, i.e., applying the SB transition as soon as b0 starts a new sentence. Table 1 shows this difference in the 1926 (a) sentences (b) documents Figure 3: Average length of training sequences used during training for early update and max violation. oracle transitions of each sentence from Figure 1. During preliminary experiments we compared the two alternatives and found that the early version performed better than the late.3 3 Learning We focus on training structured perceptrons for search-based dependency parsing. Here, the score of a parse is defined as the scalar product of a weight vector w and a global feature vector Φ. The feature vector in turn is defined as the sum of local feature vectors φ, corresponding to the features extracted for a single transition t given a configuration c. During prediction we thus aim to obtain ˆy = arg max y∈Y Φ(y) · w = arg max y∈Y X (t,c)∈TRSEQ(y) φ(t, c) · w (1) where TRSEQ represents the sequence of configurations and transitions executed to obtain the tree y. 
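For readability, Equation (1) above can be written out as follows: the score of a full derivation is the sum of its local transition scores, and the parser seeks the highest-scoring derivation.

```latex
\hat{y} \;=\; \operatorname*{arg\,max}_{y \in Y} \, \Phi(y) \cdot w
        \;=\; \operatorname*{arg\,max}_{y \in Y} \sum_{(t,c) \in \mathrm{TRSEQ}(y)} \phi(t, c) \cdot w
```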
As the space of possible transition sequences is too large to search exhaustively, we use beam search for approximate search. Early Update. Using approximate search while training structured perceptrons is complicated by the fact that the correct solution may actually obtain the highest score given the current model, but it was pruned off by the search procedure and therefore never considered. Collins and Roark (2004) solve this by halting search as soon as 3One could also imagine leaving the decision of when to apply SB latent and let the machine learning decide. However, preliminary experiments again suggested that this strategy was inferior to the earliest possible point. the correct solution is pruned and then making an early update on the partial sequences. Intuitively this makes sense, since once the correct solution is no longer reachable, it makes no sense to continue searching. Max-violation updates. Huang et al. (2012) note that early updates require a considerable number of training iterations as it often discards a big portion of the training data. Moreover, they show that updates covering a greater subsequence can also be valid and define the max-violation update. Specifically, max-violation updates extend the beam beyond early updates and apply updates where the maximum difference in scores as defined by Equation (1) between the correct solution and the best prediction (i.e., the maximal violation) is used for update. They show that this leads to faster convergence compared to early update. The curse of long sequences. Neither early nor max-violation updates commit to using the full training sequence for updates. In standard sentence-level tasks such as part-of-speech tagging or sentence-based dependency parsing these updates suffice and reasonably quickly reach a level where all or almost all of the training sequences are used for training. An entire document, however, may be composed of tens or hundreds of sentences, leading to transition sequences that are orders of magnitude longer. To illustrate the difference between sentencelevel and document-level parsing Figure 3 shows plots of the average percentage of the gold training sequences that are being used as training progresses on the Switchboard training set. The left plot shows the parser trained on sen1927 tences, the right one when it is trained on documents (where it also has to predict sentence boundaries). On the sentence level we see that both update strategies quite quickly reach close to 100%, i.e., they see more or less complete transition sequences during training. On the document level the picture is considerably different. The average length of seen transition sequences never even goes above 50%. In other words, more than half of the training data is never used. Early update shows a slow increase over time, presumably because the parser sees a bit more of every training instance at every iteration and therefore advances. However, max violation starts off using much more training data than early update, but then drops and settles around 20%. This illustrates the fact that max violation does not commit to exploiting more training data, but rather selects the update which constitutes the maximum violation irrespective of how much of the instance is being used. Empirically, even though the percentage of used training data decreases over iterations, max violation is still profiting from more iterations in the document-level task (cf. Figure 4 in Section 4). Delayed LaSO. 
To solve the problem with the discarded training data, we follow Bj¨orkelund and Kuhn (2014) and apply the DLASO4 update. This idea builds on early update, but crucially differs in the sense that the remainder of a training sequence is not discarded when a mistake is made. Rather, the corresponding update is stored and the beam is reseeded with the correct solution. This enables the learning algorithm to exploit the full training data while still making sound updates (or, using the terminology of Huang et al. (2012), a number of updates that are all violations). Pseudocode for DLASO is shown in Algorithm 1. Similar to early update it performs beam search until the correct item falls off the beam (lines 9-12). Here, early update would halt, update the weights w and move on to the next instance. Instead, DLASO computes the corresponding update, i.e., a change in the w, and stores it away. It then resets the beam to the correct solution ci, and continues beam search. This procedure is repeated until the end of a sequence, with a final check for correctness after search has finished (line 15). After a complete pass-through of the training instance an update is made if any updates were recorded during beam search (lines 17-18). 4Delayed Learning as Search Optimization Algorithm 1 DLaSO Input: Training data D = {(xi, yi)}n i=1, epochs T, beam size B. Output: Weight vector w. 1: w = 0 2: for t ∈1..T do 3: for (x, y) ∈D do 4: c0..n = ORACLE(y) 5: Beam = {c0} 6: Updates = {} 7: for i ∈1 .. (n −1) do 8: Beam = EXPANDANDFILTER(Beam, B) 9: if ci ̸∈Beam then 10: ˆy = BEST(Beam) 11: Updates = Updates ∪CALCUPDATE(ci, ˆy) 12: Beam = {ci} 13: Beam = EXPANDANDFILTER(Beam, B) 14: ˆy = BEST(Beam) 15: if cn ̸= ˆy then 16: Updates = Updates ∪CALCUPDATE(cn, ˆy) 17: if |Updates| > 0 then 18: w = APPLYUPDATES(w, Updates) 19: return w The DLASO update is closely related to LASO (Daum´e III and Marcu, 2005), but differs in that it delays the updates until the full instance has been decoded. Bj¨orkelund and Kuhn (2014) show that the difference is important, as it prevents the learning algorithm from getting feedback within instances. Without the delay the learner can bias the weights for rare (e.g., lexicalized) features that occur within a single instance which renders the learning setting quite different from test time inference where no such feedback is available. 4 Experimental setup Data sets. We experiment with two parts of the English Penn Treebank (Marcus et al., 1993). We use the Wall Street Journal (WSJ) as an example of copy-edited newspaper-quality texts with proper punctuation and capitalized sentences. We also use the Switchboard portion which consists of (transcribed) telephone conversations between strangers. Following previous work on Switchboard we lowercase all text and remove punctuation and disfluency markups. We use sections 2-21 of the WSJ for training, 24 as development set and 23 as test set. For Switchboard we follow Charniak and Johnson (2001). We convert both data sets to Stanford dependencies with the Stanford dependency converter (de Marneffe et al., 2006). We predict part-of-speech tags with the CRF tagger MARMOT (M¨uller et al., 2013) and annotate the training sets via 10-fold jackknifing. Depending on the experimental sce1928 nario we use MARMOT in two different settings – standard sentence-level where we train and apply it on sentences, and document-level where a whole document is fed to the tagger, implicitly treating it as a single very long sentence. Sentence boundary detection. 
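(Before turning to the sentence boundary detection baselines, the following minimal Python sketch mirrors the DLaSO loop of Algorithm 1 above. Every helper it takes — oracle, expand_and_filter, best, calc_update, apply_updates, zero_weights — is a placeholder for the corresponding operation named in the pseudocode, not an actual API of the parser.)

```python
def dlaso_train(data, epochs, beam_size, oracle,
                expand_and_filter, best, calc_update, apply_updates, zero_weights):
    """Sketch of Algorithm 1 (DLaSO). The helper arguments stand in for the
    parser-specific operations named in the pseudocode."""
    w = zero_weights()
    for _ in range(epochs):
        for x, y in data:
            gold = oracle(y)                     # gold configurations c_0 .. c_n
            beam = [gold[0]]
            updates = []
            for i in range(1, len(gold) - 1):    # Algorithm 1, lines 7-12
                beam = expand_and_filter(beam, beam_size, w, x)
                if gold[i] not in beam:          # the correct item fell off the beam
                    updates.append(calc_update(gold[i], best(beam)))
                    beam = [gold[i]]             # reseed with the correct configuration
            beam = expand_and_filter(beam, beam_size, w, x)   # lines 13-16
            if gold[-1] != best(beam):
                updates.append(calc_update(gold[-1], best(beam)))
            if updates:                          # delayed: apply all updates at once
                w = apply_updates(w, updates)
    return w
```

The key difference to early update is visible in the loop body: instead of abandoning the rest of the instance when the gold item falls out of the beam, the update is stored, the beam is reseeded with the correct configuration, and decoding continues to the end of the document.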
We work with two well-established sentence boundary detection baselines. Following (Read et al., 2012) we use the tokenizer from the Stanford CoreNLP (Manning et al., 2014) and the sentence boundary detector from OpenNLP5 which has been shown to achieve state-of-the-art results on WSJ. We evaluate the performance of sentence boundary detection on the token level using F-measure (F1).6 Typical sentence boundary detectors such as CORENLP or OPENNLP focus on punctuation marks and are therefore inapplicable to data like Switchboard that does not originally include punctuation. In such cases CRF taggers are commonly selected as baselines, e.g. for punctuation prediction experiments (Zhang et al., 2013a). We therefore introduce a third baseline using MARMOT. For this, we augment the POS tags with information to indicate if a token starts a new sentence or not. We prepare the training data accordingly and train the document-level sequence labeler on them. Table 2 shows the accuracies of all baseline systems on the development sets. For WSJ all three algorithms achieve similar results which shows that MARMOT is a competitive baseline. As can be seen, predicting sentence boundaries for the Switchboard dataset is a more difficult task than for well-formatted text like the WSJ. WSJ Switchboard OPENNLP 98.09 – CORENLP 98.60 – MARMOT 98.21 71.78 Table 2: Results (F1) for baselines for sentence boundary detection on dev sets. Parser implementation. Our parser implements the labeled version of the transition system described in Section 2 with a default beam size of 20. We use the oracle by Nivre et al. (2009) to create transition sequences for each sentence of a document, and then concatenate them with SB transitions that occur as early as possible (cf. 5http://opennlp.apache.org 6A true positive is defined as a token that was correctly predicted to begin a new sentence. Section 2). The feature set is based on previous work (Zhang and Nivre, 2011; Bohnet and Kuhn, 2012; Bohnet et al., 2013) and was developed for a sentence-based parser for the WSJ. We made initial experiments trying to introduce new features aimed at capturing sentence boundaries such as trying to model verb subcategorization or sentence length, however none of these proved useful compared to the baseline feature set. Following the line of work by Bohnet et al., we use the passiveaggressive algorithm (Crammer et al., 2006) instead of the vanilla perceptron, parameter averaging (Collins, 2002), and a hash function to map features (Bohnet, 2010).7 5 Analysis Comparison of training methods. Figure 4 shows learning curves of the different training algorithms where sentence boundary F1 and parsing accuracy LAS are plotted as a function of training iterations. The plots show performance for early update, max-violation, and DLASO updates. In addition, a greedy version of the parser is also included. The greedy parser uses a plain averaged perceptron classifier that is trained on all the training data. The straight dashed line corresponds to the MARMOT baseline. While the greedy parser, DLASO, and the MARMOT baseline all exploit the full training data during training, early update and maxviolation do not (as shown in Section 3). This fact has a direct impact on the performance of these systems. DLASO reaches a plateau rather quickly, whereas even after 100 iterations, early update and max-violation perform considerably worse.8 We also see that the greedy parser quickly reaches an optimum and then starts degrading, presumably due to overfitting. 
It is noteworthy, however, that max-violation needs something between 40 to 60 iterations until it reaches a level similar to the greedy parsers optimal value. This effect is quite different from single sentence parsing scenarios, where it is known that beam search parsers easily outperform the greedy counterparts, requiring not nearly as many training iterations. 7We make the source code of the parser available on the first author’s website. 8The x-axis is cut at 100 iterations for simplicity. Although early update and max-violation still are growing at this point, the overall effect does not change – even after 200 iterations the DLASO update outperforms the other two by at least two points absolute. 1929 (a) sentence boundary detection (b) parsing Figure 4: Performance of different update strategies on the Switchboard development set. Figure 5: The effect of increasing beam size. Increasing the beam size. Intuitively, using a bigger beam size might alleviate the problem of discarded training data and enable max-violation to exploit more training data. Figure 5 shows the sentence boundary F1 as a function of training iterations for different beam sizes for DLASO and max-violation. For DLASO, we see that a bigger beam provides a slight improvement. Maxviolation shows a greater correlation between greater beam size and improved F1. However, even with a beam of size 100 max-violation is nowhere near DLASO. In theory a beam size orders of magnitude greater may rival DLASO but as the beam size directly influences the time complexity of the parser, this is not a viable option. Does syntax help? One of the underlying assumptions of the joint model is our expectation that access to syntactic information should support the model in finding the sentence boundaries. We have already seen that the joint parser outperforms the MARMOT baseline by a big margin in terms of sentence boundary F1 (Figure 4). However, the comparison is not entirely fair as the two systems use different feature sets and learning algorithms. To properly measure the effect of syntactic information on the sentence boundary detection task, we therefore trained another model for the joint system on a treebank where we replaced the gold-standard trees with trivial trees that connect the last token of each sentence to the root node, and everything in between as a left-branching chain. We dub this setting NOSYNTAX and it allows us to use exactly the same machine learning for a fair comparison between a system that has access to syntax and one without. As the syntactic complexity in Switchboard is rather low, we compare these two systems also on a version of the WSJ where we removed all punctuation and lower-cased all words, effectively making it identical to the Switchboard setting (henceforth WSJ∗). Figure 6 shows sentence boundary F1 over training iterations when training with and without access to syntax. On both data sets, the system with access to syntax stays consistently above the other system. The striking difference between the data sets is that syntactic information has a much bigger impact on WSJ∗than on Switchboard, which we attribute to the higher syntactic complexity of newswire text. Overall the comparison shows clearly that syntactic structure provides useful information to the task of sentence boundary detection. 1930 (a) Switchboard (b) WSJ∗ Figure 6: The effect of syntactic information on sentence boundary prediction, on dev sets. 6 Final results Sentence boundary detection. 
We optimize the number of iterations on the dev sets: for the joint model we take the iteration with the highest average between F1 and LAS, NOSYNTAX is tuned according to F1. Table 3 gives the performance of the sentence boundary detectors on test sets.9 On WSJ all systems are close to 98 and this high number once again affirms that the task of segmenting newspaper-quality text does not leave much space for improvement. Although the parsing models outperform MARMOT, the improvements in F1 are not significant. In contrast, all systems fare considerably worse on WSJ∗which confirms that the orthographic clues in newspaper text suffice to segment the sentences properly. Although NOSYNTAX outperforms MARMOT, the difference is not significant. However, when real syntax is used (JOINT) we see a huge improvement in F1 – 10 points absolute – which is significantly better than both NOSYNTAX and MARMOT. On Switchboard MARMOT is much lower and both parsing models outperform it significantly. Surprisingly the NOSYNTAX system achieves a very high result beating the baseline significantly by almost 4.5 points. The usage of syntax in the JOINT model raises this gain to 4.8 points. 9We test for significance using the Wilcoxon signed-rank test with p < 0.01. † and ‡ denote significant increases over MARMOT and NOSYNTAX, respectively. ∗denotes significant increases over JOINT (Table 4). WSJ Switchboard WSJ∗ MARMOT 97.64 71.87 53.02 NOSYNTAX 98.21 76.31† 55.15 JOINT 98.21 76.65† 65.34†‡ Table 3: Sentence boundary detection results (F1) on test sets. Parsing. In order to evaluate the joint model on the parsing task separately we compare it to pipeline setups. We train a basic parser on single sentences using gold standard sentence boundaries, predicted POS tags and max-violation updates (GOLD). The number of training iterations is tuned to optimize LAS on the dev set. This parser is used as the second stage in the pipeline models. Additionally, we also build a pipeline where we use JOINT only as a sentence segmenter and then parse once again (denoted JOINT-REPARSED). Table 4 shows the results on the test sets. For WSJ, where sentence segmentation is almost trivial, we see only minor drops in LAS between GOLD and the systems that use predicted sentence boundaries. Among the systems that use predicted boundaries, no differences are significant. WSJ Switchboard WSJ∗ GOLD 90.22 84.99 88.71 MARMOT 89.81 78.93 83.37 NOSYNTAX 89.95 80.30† 83.61 JOINT 89.71 79.97† 85.66†‡ JOINT-REPARSED 89.93 80.61†‡∗ 85.38†‡ Table 4: Parsing results (LAS) on test sets for different sentence boundaries. For WSJ∗and Switchboard the picture is much different. Compared to GOLD, all systems show considerable drops in accuracy which asserts that errors from the sentence boundary detection task propagate to the parser and worsen the parser accuracy. On Switchboard the parsers yield significantly better results than MARMOT. The best result is obtained after reparsing and this is also significantly better than any other system. Although there is a slight drop in accuracy between NOSYNTAX and JOINT, this difference is not significant. The results on WSJ∗show that not only does syntax help to improve sentence segmentation, it does so to a degree that parsing results deteriorate when simpler sentence boundary detectors are used. Here, both JOINT and JOINT-REPARSED 1931 obtain significantly better parsing accuracies than the systems that do not have access to syntax during sentence boundary prediction. 
Although JOINT-REPARSED performs a bit worse, the difference compared to JOINT is not significant. 7 Related work Zhang and Clark (2008) first showed how to train transition-based parsers with the structured perceptron (Collins, 2002) using beam search and early update (Collins and Roark, 2004). It has since become the de facto standard way of training search-based transition-based dependency parsers (Huang and Sagae, 2010; Zhang and Nivre, 2011; Bohnet et al., 2013). Huang et al. (2012) showed how max-violation leads to faster convergence for transition-based parsers and max-violation updates have subsequently been applied to other tasks such as machine translation (Yu et al., 2013) and semantic parsing (Zhao and Huang, 2015). Sentence Boundary Detection. Sentence boundary detection has attracted only modest attention by the research community even though it is a component in every real-world NLP application. Previous work is divided into rule-based, e.g., CoreNLP (Manning et al., 2014), and machine learning approaches (e.g., OpenNLP, a re-implementation of Reynar and Ratnaparkhi (1997)’s MxTerminator). The task is often simplified to the task of period disambiguation (Kiss and Strunk, 2006), which only works on text that uses punctuation consistently. The current state of the art uses sequence labelers, e.g., a CRF (Evang et al., 2013; Dridan and Oepen, 2013). For a broad survey of methodology and tools, we refer the reader to Read et al. (2012). Joint models. Solving several tasks jointly has lately been popular in transition-based parsing, e.g., combining parsing with POS tagging (Hatori et al., 2011; Bohnet and Nivre, 2012) and tokenization (Zhang et al., 2013b; Zhang et al., 2014). Joint approaches avoid error propagation between the subtasks and often lead to overall better models, especially for the lower level tasks that suddenly have access to syntactic information. Our transition system is inspired by the work of Zhang et al. (2013a). They present a projective transition-based parser that jointly predicts punctuation and syntax. Their ArcEager transition system (Nivre, 2003) includes an additional transition that introduces punctuation similar to our SB transition. They also use beam search and circumvent the problem of long training sequences by chopping up the training data into pseudo-documents of at most 10 sentences. As we have shown, this solution works because the training instances are not long enough to hurt the performance. However, while this is possible for parsing, other tasks may not be able to chop up their training data. 8 Conclusion We have demonstrated that training a structured perceptron for inexact search on very long input sequences ignores significant portions of the training data when using early update or max-violation. We then showed how this effect can be avoided by applying a different update strategy, DLASO, which leads to considerably better models in significantly less time. This effect only occurs when the training instances are very long, e.g., on whole documents, but not when training on single sentences. We also showed that the lower performance of early update and max-violation cannot be compensated for by increasing beam size or number of iterations. We compared our system for joint sentence boundary detection and dependency parsing to competitive pipeline systems showing that syntax can provide valuable information to sentence boundary prediction when punctuation and capitalization is not available. 
Acknowledgments This work was supported by the German Research Foundation (DFG) in project D8 of SFB 732. We also thank spaCy GmbH for making it possible for the third author to finish his contribution to this work. References Anders Bj¨orkelund and Jonas Kuhn. 2014. Learning Structured Perceptrons for Coreference Resolution with Latent Antecedents and Non-local Features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 47–57, Baltimore, Maryland, June. Association for Computational Linguistics. Bernd Bohnet and Jonas Kuhn. 2012. The best of bothworlds – a graph-based completion model for transition-based parsers. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 77–87, 1932 Avignon, France, April. Association for Computational Linguistics. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455–1465, Jeju Island, Korea, July. Association for Computational Linguistics. Bernd Bohnet, Joakim Nivre, Igor Boguslavsky, Rich´ard Farkas, Filip Ginter, and Jan Hajiˇc. 2013. Joint morphological and syntactic analysis for richly inflected languages. Transactions of the Association for Computational Linguistics, 1:415–428. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89–97, Beijing, China, August. Coling 2010 Organizing Committee. Eugene Charniak and Mark Johnson. 2001. Edit detection and parsing for transcribed speech. In In Proc. NAACL, pages 118–126. Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1–8. Association for Computational Linguistics, July. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive–aggressive algorithms. Journal of Machine Learning Reseach, 7:551–585, March. Hal Daum´e III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In ICML, pages 169–176. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, pages 449–454. Rebecca Dridan and Stephan Oepen. 2013. Document parsing: Towards realistic syntactic analysis. In Proceedings of the 13th International Conference on Parsing Technologies, Nara, Japan. Kilian Evang, Valerio Basile, Grzegorz Chrupała, and Johan Bos. 2013. Elephant: Sequence labeling for word and sentence segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1422–1426, Seattle, Washington, USA, October. Association for Computational Linguistics. Yoav Goldberg, Kai Zhao, and Liang Huang. 2013. Efficient implementation of beam-search incremental parsers. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 628–633, Sofia, Bulgaria, August. Association for Computational Linguistics. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2011. Incremental joint pos tagging and dependency parsing in chinese. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1216–1224, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077– 1086, Uppsala, Sweden, July. Association for Computational Linguistics. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured Perceptron with Inexact Search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151, Montr´eal, Canada, June. Association for Computational Linguistics. Tibor Kiss and Jan Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics, 32(4):485–525. J´achym Kol´aˇr, Elizabeth Shriberg, and Yang Liu. 2006. Using prosody for automatic sentence segmentation of multi-party meetings. In Text, Speech and Dialogue, pages 629–636. Springer. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. COMPUTATIONAL LINGUISTICS, 19(2):313–330. Thomas M¨uller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322–332, Seattle, Washington, USA, October. Association for Computational Linguistics. 1933 Joakim Nivre, Marco Kuhlmann, and Johan Hall. 2009. An improved oracle for dependency parsing with online reordering. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 73–76, Paris, France, October. Association for Computational Linguistics. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149–160. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359, Suntec, Singapore, August. Association for Computational Linguistics. Jonathon Read, Rebecca Dridan, Stephan Oepen, and Lars Jørgen Solberg. 2012. Sentence boundary detection: A long solved problem? In Proceedings of COLING 2012: Posters, pages 985–994, Mumbai, India, December. The COLING 2012 Organizing Committee. Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 16– 19, Washington, DC, USA, March. Association for Computational Linguistics. Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. 
Max-violation perceptron and forced decoding for scalable MT training. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1112–1123, Seattle, Washington, USA, October. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2008. A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562– 571, Honolulu, Hawaii, October. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193, Portland, Oregon, USA, June. Association for Computational Linguistics. Dongdong Zhang, Shuangzhi Wu, Nan Yang, and Mu Li. 2013a. Punctuation prediction with transition-based parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 752–760, Sofia, Bulgaria, August. Association for Computational Linguistics. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013b. Chinese parsing exploiting characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 125–134, Sofia, Bulgaria, August. Association for Computational Linguistics. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level chinese dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1326–1336, Baltimore, Maryland, June. Association for Computational Linguistics. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416–1421, Denver, Colorado, May– June. Association for Computational Linguistics. 1934
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1935–1943, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics MUTT: Metric Unit TesTing for Language Generation Tasks Willie Boag, Renan Campos, Kate Saenko, Anna Rumshisky Dept. of Computer Science University of Massachusetts Lowell 198 Riverside St, Lowell, MA 01854 {wboag,rcampos,saenko,arum}@cs.uml.edu Abstract Precise evaluation metrics are important for assessing progress in high-level language generation tasks such as machine translation or image captioning. Historically, these metrics have been evaluated using correlation with human judgment. However, human-derived scores are often alarmingly inconsistent and are also limited in their ability to identify precise areas of weakness. In this paper, we perform a case study for metric evaluation by measuring the effect that systematic sentence transformations (e.g. active to passive voice) have on the automatic metric scores. These sentence “corruptions” serve as unit tests for precisely measuring the strengths and weaknesses of a given metric. We find that not only are human annotations heavily inconsistent in this study, but that the Metric Unit TesT analysis is able to capture precise shortcomings of particular metrics (e.g. comparing passive and active sentences) better than a simple correlation with human judgment can. 1 Introduction The success of high-level language generation tasks such as machine translation (MT), paraphrasing and image/video captioning depends on the existence of reliable and precise automatic evaluation metrics. Figure 1: A few select entries from the SICK dataset. All of these entries follow the same “Negated Subject” transformation between sentence 1 and sentence 2, yet humans annotated them with an inconsistently wide range of scores (from 1 to 5). Regardless of whether the gold labels for this particular transformation should score this high or low, they should score be scored consistently. Efforts have been made to create standard metrics (Papineni et al., 2001; Lin, 2004; Denkowski and Lavie, 2014; Vedantam et al., 2014) to help advance the state-of-the-art. However, most such popular metrics, despite their wide use, have serious deficiencies. Many rely on ngram matching and assume that annotators generate all reasonable reference sentences, which is infeasible for many tasks. Furthermore, metrics designed for one task, e.g., MT, can be a poor fit for other tasks, e.g., video captioning. To design better metrics, we need a principled approach to evaluating their performance. Historically, MT metrics have been evaluated by how well they correlate with human annotations (CallisonBurch et al., 2010; Machacek and Bojar, 2014). However, as we demonstrate in Sec. 5, human judgment can result in inconsistent scoring. This presents a serious problem for determining whether 1935 a metric is ”good” based on correlation with inconsistent human scores. When ”gold” target data is unreliable, even good metrics can appear to be inaccurate. Furthermore, correlation of system output with human-derived scores typically provides an overall score but fails to isolate specific errors that metrics tend to miss. This makes it difficult to discover system-specific weaknesses to improve their performance. For instance, an ngram-based metric might effectively detect non-fluent, syntactic errors, but could also be fooled by legitimate paraphrases whose ngrams simply did not appear in the training set. 
Although there has been some recent work on paraphrasing that provided detailed error analysis of system outputs (Socher et al., 2011; Madnani et al., 2012), more often than not such investigations are seen as above-and-beyond when assessing metrics. The goal of this paper is to propose a process for consistent and informative automated analysis of evaluation metrics. This method is demonstrably more consistent and interpretable than correlation with human annotations. In addition, we extend the SICK dataset to include un-scored fluency-focused sentence comparisons and we propose a toy metric for evaluation. The rest of the paper is as follows: Section 2 introduces the corruption-based metric unit testing process, Section 3 lists the existing metrics we use in our experiments as well as the toy metric we propose, Section 4 describes the SICK dataset we used for our experiments, Section 5 motivates the need for corruption-based evaluation instead of correlation with human judgment, Section 6 describes the experimental procedure for analyzing the metric unit tests, Section 7 analyzes the results of our experiments, and in Section 8 we offer concluding remarks. 2 Metric Unit TesTs We introduce metric unit tests based on sentence corruptions as a new method for automatically evaluating metrics developed for language generation tasks. Instead of obtaining human ranking for system output and comparing it with the metricbased ranking, the idea is to modify existing references with specific transformations, and examine the scores assigned by various metrics to such corruptions. In this paper, we analyze three broad categories of transformations – meaning-altering, meaning-preserving, and fluency-disrupting sentence corruptions – and we evaluate how successfully several common metrics can detect them. As an example, the original sentence “A man is playing a guitar.” can be corrupted as follows: Meaning-Altering: A man is not playing guitar. Meaning-Preserving: A guitar is being played by a man. Fluency-Disrupting: A man a guitar is playing. Examples for each corruption type we consider are shown in Tables 1 and 2. 2.1 Meaning-altering corruptions Meaning-altering corruptions modify the semantics of a sentence, resulting in a new sentence that has a different meaning. Corruptions (1– 2) check whether a metric can detect small lexical changes that cause the sentence’s semantics to entirely change. Corruption (3) is designed to fool distributed and distributional representations of words, whose vectors often confuse synonyms and antonyms. 2.2 Meaning-preserving corruptions Meaning-preserving corruptions change the lexical presentation of a sentence while still preserving meaning and fluency. For such transformations, the “corruption” is actually logically equivalent to the original sentence, and we would expect that consistent annotators would assign roughly the same score to each. These transformations include changes such as rephrasing a sentence from active voice to passive voice (4) or paraphrasing within a sentence (5). 2.3 Fluency disruptions Beyond understanding semantics, metrics must also recognize when a sentence lacks fluency and grammar. Corruptions (7–9) were created for this reason, and do so by generating ungrammatical sentences. 
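The fluency-disrupting corruptions were generated with simple chunk- and POS-based rules (cf. Section 4.2). The snippet below is a simplified stand-in for such rules, not the exact generation code: it assumes prepositional-phrase spans and chunk boundaries have already been produced by a tagger or chunker, and the spans passed in the example are hypothetical.

```python
import random

def duplicate_pp(tokens, pp_spans):
    """Corruption 7: repeat one prepositional phrase, e.g.
    'A boy walks at night' -> 'A boy walks at night at night'."""
    start, end = random.choice(pp_spans)          # (inclusive, exclusive) token indices
    return tokens[:end] + tokens[start:end] + tokens[end:]

def remove_pp_head(tokens, pp_spans):
    """Corruption 8: drop the preposition heading a PP, e.g.
    'A man danced in costume' -> 'A man danced costume'."""
    start, _ = random.choice(pp_spans)
    return tokens[:start] + tokens[start + 1:]

def reorder_chunks(chunks):
    """Corruption 9: shuffle the chunk order (assumes at least two distinct chunks), e.g.
    '[A woman] [is slicing] [garlics]' -> '[Is slicing] [garlics] [a woman]'."""
    shuffled = chunks[:]
    while shuffled == chunks:                     # make sure something actually moved
        random.shuffle(shuffled)
    return [tok for chunk in shuffled for tok in chunk]

# Example with a hypothetical PP span from a chunker:
tokens = "A man danced in costume".split()
print(" ".join(remove_pp_head(tokens, pp_spans=[(3, 5)])))   # 'A man danced costume'
```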
1936 Meaning Altering 1 negated subject (337) “A man is playing a harp” “There is no man playing a harp” 2 negated action (202) “A jet is flying” “A jet is not flying” 3 antonym replacement (246) “a dog with short hair” “a dog with long hair” Meaning Preserving 4 active-to-passive (238) “A man is cutting a potato” “A potato is being cut by a man” 5 synonymous phrases (240) “A dog is eating a doll” “A dog is biting a doll” 6 determiner substitution (65) “A cat is eating food” “The cat is eating food” Table 1: Corruptions from the SICK dataset. The left column lists the number of instances for each corruption type. Fluency disruptions 7 double PP (500) “A boy walks at night” “A boy walks at night at night” 8 remove head from PP (500) “A man danced in costume” “A man danced costume” 9 re-order chunked phrases (500) “A woman is slicing garlics” “Is slicing garlics a woman” Table 2: Generated corruptions. The first column gives the total number of generated corruptions in parentheses. 3 Metrics Overview 3.1 Existing Metrics Many existing metrics work by identifying lexical similarities, such as n-gram matches, between the candidate and reference sentences. Commonly-used metrics include BLEU, CIDEr, and TER: • BLEU, an early MT metric, is a precisionbased metric that rewards candidates whose words can be found in the reference but penalizes short sentences and ones which overuse popular n-grams (Papineni et al., 2001). • CIDEr, an image captioning metric, uses a consensus-based voting of tf-idf weighted ngrams to emphasize the most unique segments of a sentence in comparison (Vedantam et al., 2014). • TER (Translation Edit Rate) counts the changes needed so the surface forms of the output and reference match (Snover et al., 2006). Other metrics have attempted to capture similarity beyond surface-level pattern matching: • METEOR, rather than strictly measuring ngram matches, accounts for soft similarities between sentences by computing synonym and paraphrase scores between sentence alignments (Denkowski and Lavie, 2014). • BADGER takes into account the contexts over the entire set of reference sentences by using a simple compression distance calculation after performing a series of normalization steps (Parker, 2008). • TERp (TER-plus) minimizes the edit distance by stem matches, synonym matches, and phrase substitutions before calculating the TER score, similar to BADGER’s normalization step (Snover et al., 2009). We evaluate the strengths and weaknesses of these existing metrics in Section 7. 3.2 Toy Metric: W2V-AVG To demonstrate how this paper’s techniques can also be applied to measure a new evaluation metric, we create a toy metric, W2V-AVG, using the cosine of the centroid of a sentence’s word2vec embeddings (Mikolov et al., 2013). The goal for this true bagof-words metric is to serve as a sanity check for how corruption unit tests can identify metrics that capture soft word-level similarities, but cannot handle directed relationships between entities. 4 Datasets 4.1 SICK All of our experiments are run on the Sentences Involving Compositional Knowledge (SICK) dataset, 1937 which contains entries consisting of a pair of sentences and a human-estimated semantic relatedness score to indicate the similarity of the sentence pair (Marelli et al., 2014). The reason we use this data is twofold: 1. it is a well-known and standard dataset within semantic textual similarity community. 2. it contains many common sentence transformation patterns, such as those described in Table 1. 
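Since W2V-AVG (Section 3.2) is the simplest of the metrics considered, it is worth spelling out. The sketch below assumes pre-trained word vectors in a dict-like `embeddings` object; aggregating over multiple references by taking the maximum cosine is an assumption, since only the single-pair computation is specified above.

```python
import numpy as np

def centroid(sentence, embeddings):
    """Average of the word vectors of the tokens we have vectors for."""
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def w2v_avg(candidate, references, embeddings):
    """Cosine between the candidate centroid and each reference centroid,
    aggregated (our assumption) by taking the maximum over references."""
    c = centroid(candidate, embeddings)
    scores = []
    for ref in references:
        r = centroid(ref, embeddings)
        if c is None or r is None:
            continue
        scores.append(float(np.dot(c, r) / (np.linalg.norm(c) * np.linalg.norm(r))))
    return max(scores) if scores else 0.0
```

Because the representation discards word order entirely, corruptions that merely permute tokens are invisible to this metric, a point we return to in Section 7.4.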
The SICK dataset was built from the 8K ImageFlickr dataset1 and the SemEval 2012 STS MSRVideo Description corpus2. Each of these original datasets contain human-generated descriptions of images/videos – a given video often has 20-50 reference sentences describing it. These reference sets prove very useful because they are more-or-less paraphrases of one another; they all describe the same thing. The creators of SICK selected sentence pairs and instructed human annotators to ensure that all sentences obeyed proper grammar. The creators of SICK ensured that two of the corruption types – meaning-altering and meaning-preserving – were generated in the annotated sentence pairs. We then filtered through SICK using simple rule-based templates3 to identify each of the six corruption types listed in Table 1. Finally, we matched the sentences in the pair back to their original reference sets in the Flickr8 and MSR-Video Description corpora to obtain reference sentences for our evaluation metrics experiments. 4.2 SICK+ Since all of the entries in the SICK dataset were created for compositional semantics, every sentence was manually checked by annotators to ensure fluency. For our study, we also wanted to measure 1http://nlp.cs.illinois.edu/ HockenmaierGroup/data.html 2http://www.cs.york.ac.uk/ semeval-2012/task6/index.php?id=data 3For instance, the “Antonym Replacement” template checked to see if the two sentences were one word apart, and if so whether they had a SICK-annotated NOT ENTAILMENT relationsip. the effects of bad grammar between sentences, so we automatically generated our own corruptions to the SICK dataset to create SICK+, a set of fluency-disrupting corruptions. The rules to generate these corruptions were simple operations involving chunking and POS-tagging. Fortunately, these corruptions were, by design, meant to be ungrammatical, so there was no need for (difficult) automatic correctness checking for fluency. 5 Inconsistencies in Human Judgment A major issue with comparing metrics against human judgment is that human judgments are often inconsistent. One reason for this is that high-level semantic tasks are difficult to pose to annotators. Consider SICK’s semantic relatedness annotation as a case study for human judgment. Annotators were shown two sentences, were asked “To what extent are the two sentences expressing related meaning?”, and were instructed to select an integer from 1 (completely unrelated) to 5 (very related). We can see the difficulty annotators faced when estimating semantic relatedness, especially because the task description was intentionally vague to avoid biasing annotator judgments with strict definitions. In the end, “the instructions described the task only through [a handful of] examples of relatedness” (Marelli et al., 2014). As a result of this hands-off annotation guideline, the SICK dataset contains glaring inconsistencies in semantic relatedness scores, even for sentence pairs where the only difference between two sentences is due to the same known transformation. Figure 1 demonstrates the wide range of human-given scores for the pairs from SICK that were created with the Negated Subject transformation. Since the guidelines did not establish how to handle the effect of this transformation, some annotators rated it high for describing the same actions, while others rated it low for having completely opposite subjects. 
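Grouping the annotated pairs by transformation type makes this inconsistency easy to quantify. A minimal sketch is given below; the tuple format is hypothetical, with the fields assumed to have been extracted from the SICK release and the template filters described above.

```python
from collections import defaultdict
from statistics import mean, stdev

def score_spread(pairs):
    """pairs: iterable of (transformation_label, gold_relatedness) tuples,
    e.g. ('negated_subject', 3.4).  Returns the mean and standard deviation
    of the gold relatedness score for each transformation type."""
    by_type = defaultdict(list)
    for label, score in pairs:
        by_type[label].append(score)
    return {label: (mean(scores), stdev(scores) if len(scores) > 1 else 0.0)
            for label, scores in by_type.items()}

# For the Negated Subject pairs this spread turns out to be substantial,
# as Figure 2 shows.
```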
To better appreciate the scope of these annotation discrepancies, Figure 2 displays the distribution of “gold” human scores for every instance of the “Negated Subject” transformation. We actually find that the relatedness score approximately follows a normal distribution centered at 3.6 with a standard 1938 Figure 2: Human annotations for the Negated Subject corruption. Figure 3: Metric predictions for the Negated Subject corruption. deviation of about .45. The issue with this distribution is that regardless of whether the annotators rank this specific transformation as low or high, they should be ranking it consistently. Instead, we see that their annotations span all the way from 2.5 to 4.5 with no reasonable justification as to why. Further, a natural question to ask is whether all sentence pairs within this common Negated Subject transformation do, in fact, share a structure of how similar their relatedness scores “should” be. To answer this question, we computed the similarity between the sentences in an automated manner using three substantially different evaluation metrics: METEOR, BADGER, and TERp. These three metrics were chosen because they present three very different approaches for quantifying semantic similarity, namely: sentence alignments, compression redundancies, and edit rates. We felt that these different approaches for processing the sentence pairs would allow for different views of their underlying relatedness. To better understand how similar an automatic metric would rate these sentence pairs, Figure 3 shows the distribution over scores predicted by the METEOR metric. The first observation is that the metric produces scores that are far more peaky than the gold scores in Figure 2, which indicates that they have a significantly more consistent structure about them. In order to see how each metric’s scores compare, Table 3 lists all pairwise correlations between the gold and the three metrics. As a sanity check, we can see that the 1.0s along the diagonal indicate perfect correlation between a prediction and itself. More interestingly, we can see that the three metrics have alarmingly low correlations with the gold scores: 0.09, 0.03, and 0.07. However, we also see that the three metrics all have significantly higher correlations amongst one another: 0.80, 0.80, and 0.91. This is a very strong indication that the three metrics all have approximate agreement about how the various sentences should be scored, but this consensus is not at all reflected by the human judgments. 6 MUTT Experiments In our Metric Unit TesTing experiments, we wanted to measure the fraction of times that a given metric is able to appropriately handle a particular corruption type. Each (original,corruption) pair is considered a trial, which the metric either gets correct or gold METEOR BADGER TERp gold 1.00 0.09 0.03 0.07 METEOR 0.09 1.00 0.91 0.80 BADGER 0.03 0.91 1.00 0.80 TERp 0.07 0.80 0.80 1.00 Table 3: Pairwise correlation between the predictions of three evaluation metrics and the gold standard. 1939 Figure 4: Results for the Determiner Substitution corruption (using Difference formula scores). incorrect. We report the percent of successful trials for each metric in Tables 4, 5, and 6. Experiments were run using 5, 10, and 20 reference sentences to understand which metrics are able perform well without much data and also which metrics are able to effectively use more data to improve. 
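The bookkeeping for these trials is straightforward. The sketch below assumes a `metric(candidate, references)` callable standing in for any of the metrics of Section 3, and applies the relative-difference criterion (defined formally just below; we take the absolute difference here) only to meaning-preserving corruptions.

```python
def trial_success(metric, original, corruption, references,
                  preserving=False, eps=1e-9, threshold=0.15):
    s_orig = metric(original, references)
    s_corr = metric(corruption, references)
    if preserving:
        # Meaning-preserving "corruptions" should score roughly the same as
        # the original: within 15% of its score (the Difference formula).
        return abs(s_orig - s_corr) / (s_orig + eps) <= threshold
    # Meaning-altering and fluency-disrupting corruptions: the original
    # must outscore the corruption.
    return s_orig > s_corr

def accuracy(metric, trials, preserving=False):
    """trials: iterable of (original, corruption, references) triples."""
    outcomes = [trial_success(metric, o, c, refs, preserving)
                for o, c, refs in trials]
    return 100.0 * sum(outcomes) / len(outcomes)
```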
An accuracy of 75% would indicate that the metric is able to assign appropriate scores 3 out of 4 times.4 For Meaning-altering and Fleuncy-disrupting corruptions, the corrupted sentence will be truly different from the original and reference sentences. A trial would be successful when the score of the original sorig is rated higher than the score of the corruption scorr: sorig > scorr Alternatively, Meaning-preserving transformations create a ”corruption” sentence which is just as correct as the original. To reflect this, we consider a trial to be successful when the score of the corruption scorr is within 15% of the score of the original sorig: sorig −scorr sorig + ϵ ≤0.15 where ϵ is a small constant (10−9) to prevent division by zero. We refer to this alternative trial formulation as the Difference formula. 4Our code is made available at https://github. com/text-machine-lab/MUTT Figure 5: Results for the Active-to-Passive corruption (using Difference formula scores). 7 Discussion 7.1 Meaning-altering corruptions As shown by the middle figure in Table 4, it is CIDEr which performs the best for Antonym Replacement. Even with only a few reference sentences, it is already able to score significantly higher than the other metrics. We believe that a large contributing factor for this is CIDEr’s use of tf-idf weights to emphasize the important aspects of each sentence, thus highlighting the modified when compared against the reference sentences. The success of these metrics reiterates the earlier point about metrics being able to perform more consistently and reliably than human judgment. 7.2 Meaning-preserving corruptions The graph in Figure 4 of the determiner substitution corruption shows an interesting trend: as the number of references increase, all of the metrics increase in accuracy. This corruption replaces “a” in the candidate with a “the’, or vice versa. As the references increase, there we tend to see more examples which use these determiners interchanagably while keeping the rest of the sentence’s meaning the same. Since a large number of references results in far more for the pair to agree on, the two scores are very close. Conversely, the decrease in accuracy in the 1940 1. Negated Subject num refs 5 10 20 CIDEr 99.4 99.4 99.4 BLEU 99.1 99.7 99.7 METEOR 97.0 98.5 98.2 BADGER 97.9 97.6 98.2 TERp 99.7 99.7 99.4 2. Negated Action num refs 5 10 20 CIDEr 98.5 98.5 98.5 BLEU 97.5 97.5 98.0 METEOR 96.0 96.0 97.0 BADGER 93.6 95.5 96.5 TERp 95.5 97.0 95.0 3. Antonym Replacement num refs 5 10 20 CIDEr 86.2 92.7 93.5 BLEU 76.4 85.4 88.6 METEOR 80.9 86.6 91.5 BADGER 76.0 85.8 88.6 TERp 75.2 79.7 80.1 Table 4: Meaning-altering corruptions. These % accuracies represent the number of times that a given metric was able to correctly score the original sentence higher than the corrupted sentence. Numbers referenced in the prose analysis are highlighted in bold. 4. Active-to-Passive num refs 5 10 20 CIDEr 5.5 0.8 2.5 BLEU 7.6 4.6 3.8 METEOR 23.9 16.0 13.0 BADGER 13.4 11.3 12.2 TERp 20.6 16.4 9.7 5. Synonymous Phrases num refs 5 10 20 CIDEr 32.1 26.2 30.0 BLEU 45.0 36.7 34.2 METEOR 62.1 62.1 62.1 BADGER 80.8 80.4 86.7 TERp 53.3 46.7 41.2 6. DT Substitution num refs 5 10 20 CIDEr 40.0 38.5 56.9 BLEU 21.5 27.7 53.8 METEOR 55.4 55.4 70.8 BADGER 80.0 84.6 95.4 TERp 6.2 10.8 27.7 Table 5: Meaning-preserving corruptions. These % accuracies represent the number of times that a given metric was able to correctly score the semantically-equaivalent ”corrupted” sentence within 15% of the original sentence. 
Numbers referenced in the prose analysis are highlighted in bold. 7. Duplicate PP num refs 5 10 20 CIDEr 100 99.0 100 BLEU 100 100 100 METEOR 95.1 98.5 99.5 BADGER 63.5 70.0 74.9 TERp 96.6 99.0 99.0 8. Remove Head From PP num refs 5 10 20 CIDEr 69.5 76.8 80.8 BLEU 63.5 81.3 87.7 METEOR 60.6 72.9 84.2 BADGER 63.1 67.0 71.4 TERp 52.7 66.5 70.4 9. Re-order Chunks num refs 5 10 20 CIDEr 91.4 95.6 96.6 BLEU 83.0 91.4 94.2 METEOR 81.2 89.6 92.4 BADGER 95.4 96.6 97.8 TERp 91.0 93.4 93.4 Table 6: Fluency-disrupting corruptions. These % accuracies represent the number of times that a given metric was able to correctly score the original sentence higher than the corrupted sentence. Numbers referenced in the prose analysis are highlighted in bold. Active-to-Passive table reflects how adding more (mostly active) references makes the system more (incorrectly) confident in choosing the active original. As the graph in Figure 5 shows, METEOR performed the best, likely due to its sentence alignment approach to computing scores. 7.3 Fluency disruptions All of the metrics perform well at identifying the duplicate prepositional phrase corruption, except for BADGER which has noticeably lower accuracy scores than the rest. These lower scores may be attributed to the compression algorithm that it uses to compute similarity. Because BADGER’s algorithm works by compressing the candidate and references jointly, we can see why a repeated phrase would be of little effort to encode – it is a compression algorithm, after all. The result of easy-to-compress redundancies is that the original sentence and its corruption have very similar scores, and BADGER gets fooled. Unlike the other two fluency-disruptions, none of 1941 the accuracy scores of the “Remove Head from PP” corruption reach 90%, so this corruption could be seen as one that metrics could use improvement on. BLEU performed the best on this task. This is likely due to its ngram-based approach, which is able to identify that deleting a word breaks the fluency of a sentence. All of the metrics perform well on the “Re-order Chunks” corruption. METEOR, however, does slightly worse than the other metrics. We believe this to be due to its method of generating an alignment between the words in the candidate and reference sentences. This alignment is computed while minimizing the number of chunks of contiguous and identically ordered tokens in each sentence pair (Chen et al., 2015). Both the original sentence and the corruption contain the same chunks, so it makes sense that METEOR would have more trouble distinguishing between the two than the n-gram based approaches. 7.4 W2V-AVG The results for the W2V-AVG metric’s success on each corruption are shown in Table 7. “Shuffled Chunks” is one of the most interesting corruptions for this metric, because it achieves an accuracy of 0% across the board. The reason for this is that W2V-AVG is a pure bag-of-words model, meaning that word order is entirely ignored, and as a result the model cannot distinguish between the original sentence and its corruption, and so it can never rank the original greater than the corruption. Surprisingly, we find that W2V-AVG is far less fooled by active-to-passive than most other metrics. Again, we believe that this can be attributed to its bag-of-words approach, which ignores the word order imposed by active and passive voices. 
Because each version of the sentence will contain nearly all of the same tokens (with the exception of a few “is” and “being” tokens), the two sentence representations are very similar. In a sense, W2V-AVG does well on passive sentences for the wrong reasons rather than understanding that the semantics are unchanged, it simply observes that most of the words are the same. However, we still see the trend that performance goes down as the number of reference average word2vec metric num references 5 10 20 1. Negated Action 72.8 74.5 60.7 2. Antonym Replacement 91.5 93.0 92.3 3. Negated Subject 98.2 99.1 97.8 4. Active-to-Passive* 84.9 83.6 80.3 5. Synonymous Phrase* 98.3 99.1 98.3 6. DT Substitution* 100.0 100.0 100.0 7. Duplicate PP 87.6 87.6 87.6 8. Remove Head From PP 78.4 82.5 82.5 9. Shuffle Chunks 00.0 00.0 00.0 1. Negated Action* 100.0 100.0 100.0 3. Negated Subject* 82.8 87.3 87.1 Table 7: Performance of the AVG-W2V metric. These % accuracies represent the number of successful trials. Numbers referenced in the prose analysis are highlighted in bold. * indicates the scores computed with the Difference formula. sentences increases. Interestingly, we can see that although W2V-AVG achieved 98% accuracy on “Negated Subject”, it scored only 75% on “Negated Action”. This initially seems quite counter intuitive - either the model should be good at the insertion of a negation word, or it should be bad. The explanation for this reveals a bias in the data itself: in every instance where the “Negated Subject” corruption was applied, the sentence was transformed from “A/The [subject] is” to “There is no [subject]”. This is differs from the change in “Negated Action”, which is simply the insertion of “not” into the sentence before an action. Because one of these corruptions resulted in 3x more word replacements, the model is able to identify it fairly well. To confirm this, we added two final entries to Table 7 where we applied the Difference formula to the “Negated Subject” and “Negated Action” corruptions to see the fraction of sentence pairs whose scores are within 15% of one another. We found that, indeed, the “Negated Action” corruption scored 100% (meaning that the corruption embeddings were very similar to the original embeddings), while the “Negated Subject” corruption pairs were only similar about 85% of the time. By analyzing these interpretable errors, we can see that stop 1942 words play a larger role than we’d want in our toy metric. To develop a stronger metric, we might change W2V-AVG so that it considers only the content words when computing the centroid embeddings. 8 Conclusion The main contribution of this work is a novel approach for analyzing evaluation metrics for language generation tasks using Metric Unit TesTs. Not only is this evaluation procedure able to highlight particular metric weaknesses, it also demonstrates results which are far more consistent than correlation with human judgment; a good metric will be able to score well regardless of how noisy the human-derived scores are. Finally, we demonstrate the process of how this analysis can guide the development and strengthening of newly created metrics that are developed. References Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53, Uppsala, Sweden, July. 
Association for Computational Linguistics. X. Chen, H. Fang, TY Lin, R. Vedantam, S. Gupta, P. Dollr, and C. L. Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. pages 25–26. Matous Machacek and Ondrej Bojar. 2014. Results of the wmt14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293–301, Baltimore, Maryland, USA, June. Association for Computational Linguistics. Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 182–190, Montr´eal, Canada, June. Association for Computational Linguistics. M. Marelli, S. Menini, M. Baroni, L. Bentivogli, R. Bernardi, R. Zamparelli, and Fondazione Bruno Kessler. 2014. A sick cure for the evaluation of compositional distributional semantic models. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In In Proceedings of NIPS. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical report, September. Steven Parker. 2008. Badger: A new machine translation metric. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas, pages 223– 231. Matthew G. Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009. Ter-plus: Paraphrase, semantic, and alignment enhancements to translation edit rate. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In Advances in Neural Information Processing Systems 24. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2014. Cider: Consensusbased image description evaluation. CoRR, abs/1411.5726. 1943
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1944–1953, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics N-gram language models for massively parallel devices Nikolay Bogoychev and Adam Lopez University of Edinburgh Edinburgh, United Kingdom Abstract For many applications, the query speed of N-gram language models is a computational bottleneck. Although massively parallel hardware like GPUs offer a potential solution to this bottleneck, exploiting this hardware requires a careful rethinking of basic algorithms and data structures. We present the first language model designed for such hardware, using B-trees to maximize data parallelism and minimize memory footprint and latency. Compared with a single-threaded instance of KenLM (Heafield, 2011), a highly optimized CPUbased language model, our GPU implementation produces identical results with a smaller memory footprint and a sixfold increase in throughput on a batch query task. When we saturate both devices, the GPU delivers nearly twice the throughput per hardware dollar even when the CPU implementation uses faster data structures. Our implementation is freely available at https://github.com/XapaJIaMnu/gLM 1 Introduction N-gram language models are ubiquitous in speech and language processing applications such as machine translation, speech recognition, optical character recognition, and predictive text. Because they operate over large vocabularies, they are often a computational bottleneck. For example, in machine translation, Heafield (2013) estimates that decoding a single sentence requires a million language model queries, and Green et al. (2014) estimate that this accounts for more than 50% of decoding CPU time. To address this problem, we turn to massively parallel hardware architectures, exempliFigure 1: Theoretical floating point performance of CPU and GPU hardware over time (Nvidia Corporation, 2015). fied by general purpose graphics processing units (GPUs), whose memory bandwidth and computational throughput has rapidly outpaced that of CPUs over the last decade (Figure 1). Exploiting this increased power is a tantalizing prospect for any computation-bound problem, so GPUs have begun to attract attention in natural language processing, in problems such as parsing (Canny et al., 2013; Hall et al., 2014), speech recognition (Chong et al., 2009; Chong et al., 2008), and phrase extraction for machine translation (He et al., 2015). As these efforts have shown, it is not trivial to exploit this computational power, because the GPU computational model rewards data parallelism, minimal branching, and minimal access to global memory, patterns ignored by many classic NLP algorithms (Section 2). We present the first language model data structure designed for this computational model. Our data structure is a trie in which individual nodes are represented by B-trees, which are searched in parallel (Section 3) and arranged compactly in 1944 memory (Section 4). Our experiments across a range of parameters in a batch query setting show that this design achieves a throughput six times higher than KenLM (Heafield, 2011), a highly efficient CPU implementation (Section 5). They also show the effects of device saturation and of data structure design decisions. 
2 GPU computational model GPUs and other parallel hardware devices have a different computational profile from widely-used x86 CPUs, so data structures designed for serial models of computation are not appropriate. To produce efficient software for a GPU we must be familiar with its design (Figure 2). 2.1 GPU design A GPU consists of many simple computational cores, which have neither complex caches nor branch predictors to hide latencies. Because they have far fewer circuits than CPU cores, GPU cores are much smaller, and many more of them can fit on a device. So the higher throughput of a GPU is due to the sheer number of cores, each executing a single thread of computation (Figure 2). Each core belongs to a Streaming Multiprocessor (SM), and all cores belonging to a SM must execute the same instruction at each time step, with exceptions for branching described below. This execution model is very similar to single instruction, multiple data (SIMD) parallelism.1 Computation on a GPU is performed by an inherently parallel function or kernel, which defines a grid of data elements to which it will be applied, each processed by a block of parallel threads. Once scheduled, the kernel executes in parallel on all cores allocated to blocks in the grid. At minimum, it is allocated to a single warp—32 cores on our experimental GPU. If fewer cores are requested, a full warp is still allocated, and the unused cores idle. A GPU offers several memory types, which differ in size and latency (Table 1). Unlike a CPU program, which can treat memory abstractly, a GPU program must explicitly specify in which physical memory each data element resides. This choice has important implications for efficiency that entail design tradeoffs, since memory closer 1Due to differences in register usage and exceptions for branching, this model is not pure SIMD. Nvidia calls it SIMT (single instruction, multiple threads). Figure 2: GPU memory hierarchy and computational model (Nvidia Corporation, 2015). Memory type Latency Size Register 0 4B Shared 4–8 16KB–96KB Global GPU 200–800 2GB–12GB CPU 10K+ 16GB–1TB Table 1: Latency (in clock cycles) and size of different GPU memory types. Estimates are adapted from Nvidia Corporation (2015) and depend on several aspects of hardware configuration. to a core is small and fast, while memory further away is large and slow (Table 1). 2.2 Designing efficient GPU algorithms To design an efficient GPU application we must observe the constraints imposed by the hardware, which dictate several important design principles. Avoid branching instructions. If a branching instruction occurs, threads that meet the branch condition run while the remainder idle (a warp divergence). When the branch completes, threads that don’t meet the condition run while the first group idles. So, to maximize performance, code must be designed with little or no branching. Use small data structures. Total memory on a state-of-the-art GPU is 12GB, expected to rise to 24GB in the next generation. Language models that run on CPU frequently exceed these sizes, so our data structures must have the smallest possible memory footprint. Minimize global memory accesses. Data in the CPU memory must first be transferred to the device. This is very slow, so data structures must reside in GPU memory. 
But even when they 1945 Data structure Size Query Ease of Construction Lossless speed backoff time Trie (Heafield, 2011) Small Fast Yes Fast Yes Probing hash table (Heafield, 2011) Larger Faster Yes Fast Yes Double array (Yasuhara et al., 2013) Larger Fastest Yes Very slow Yes Bloom filter (Talbot and Osborne, 2007) Small Slow No Fast No Table 2: A survey of language model data structures and their computational properties. reside in global GPU memory, latency is high, so wherever possible, data should be accessed from shared or register memory. Access memory with coalesced reads. When a thread requests a byte from global memory, it is copied to shared memory along with many surrounding bytes (between 32 and 128 depending on the architecture). So, if consecutive threads request consecutive data elements, the data is copied in a single operation (a coalesced read), and the delay due to latency is incurred only once for all threads, increasing throughput. 3 A massively parallel language model Let w be a sentence, wi its ith word, and N the order of our model. An N-gram language model defines the probability of w as: P(w) = |w| Y i=1 P(wi|wi−1...wi−N+1) (1) A backoff language model (Chen and Goodman, 1999) is defined in terms of n-gram probabilities P(wi|wi−1...wi−n+1) for all n from 1 to N, which are in turn defined by n-gram parameters ˆP(wi...wi−n+1) and backoff parameters β(wi−1...wi−n+1). Usually ˆP(wi...wi−n+1) and β(wi−1...wi−n+1) are probabilities conditioned on wi−1...wi−n+1, but to simplify the following exposition, we will simply treat them as numeric parameters, each indexed by a reversed n-gram. If parameter ˆP(wi...wi−n+1) is nonzero, then: P(wi|wi−1...wi−n+1) = ˆP(wi...wi−n+1) Otherwise: P(wi|wi−1...wi−n+1) = P(wi|wi−1...wi−n+2) × β(wi−1...wi−n+1) This recursive definition means that the probability P(wi|wi−1...wi−N+1) required for Equation 1 may depend on multiple parameters. If r (< N) is the largest value for which ˆP(wi|wi−1...wi−r+1) is nonzero, then we have: P(wi|wi−1...wi−N+1) = (2) ˆP(wi...wi−r+1) N Y n=r+1 β(wi−1...wi−n+1) Our data structure must be able to efficiently access these parameters. 3.1 Trie language models With this computation in mind, we surveyed several popular data structures that have been used to implement N-gram language models on CPU, considering their suitability for adaptation to GPU (Table 2). Since a small memory footprint is crucial, we implemented a variant of the trie data structure of Heafield (2011). We hypothesized that its slower query speed compared to a probing hash table would be compensated for by the throughput of the GPU, a question we return to in Section 5. A trie language model exploits two important guarantees of backoff estimators: first, if ˆP(wi...wi−n+1) is nonzero, then ˆP(wi...wi−m+1) is also nonzero, for all m < n; second, if β(wi−1...wi−n+1) is one, then β(wi−1...wi−p+1) is one, for all p > n. Zero-valued ngram parameters and one-valued backoff parameters are not explicitly stored. To compute P(wi|wi−1...wi−N+1), we iteratively retrieve ˆP(wi...wi−m+1) for increasing values of m until we fail to find a match, with the final nonzero value becoming ˆP(wi...wi−r+1) in Equation 2. We then iteratively retrieve β(wi−1...wi−n+1) for increasing values of n starting from r + 1 and continuing until n = N or we fail to find a match, multiplying all retrieved terms to compute P(wi|wi−1...wi−N+1) (Equation 2). The trie is designed to execute these iterative parameter retrievals efficiently. 
Let Σ be a our vocabulary, Σn the set of all n-grams over the vocabulary, and Σ[N] the 1946 Australia is of one many one human Poland are is are is exist is Figure 3: Fragment of a trie showing the path of N-gram is one of in bold. A query for the N-gram every one of traverses the same path, but since every is not among the keys in the final node, it returns the n-gram parameter ˆP(of|one) and returns to the root to seek the backoff parameter β(every one). Based on image from Federico et al. (2008). set Σ1 ∪... ∪ΣN. Given an n-gram key wi...wi−n+1 ∈Σ[N], our goal is to retrieve value ⟨ˆP(wi...wi−n+1), β(wi...wi−n+1)⟩. We assume a bijection from Σ to integers in the range 1, ..., |Σ|, so in practice all keys are sequences of integers. When n = 1, the set of all possible keys is just Σ. For this case, we can store keys with nontrivial values in a sorted array A and their associated values in an array V of equal length so that V [j] is the value associated with key A[j]. To retrieve the value associated with key k, we seek j for which A[j] = k and return V [j]. Since A is sorted, j can be found efficiently with binary or interpolated search (Figure 4). When n > 1, queries are recursive. For n < N, for every wi...wi−n+1 for which ˆP(wi...wi−n+1) > 0 or β(wi...wi−n+1) < 1, our data structure contains associated arrays Kwi...wi−n+1 and Vwi...wi−n+1. When key k is located in Awi...wi−n+1[j], the value stored at Vwi...wi−n+1[j] includes the address of arrays Awi...wi−n+1k and Vwi...wi−n+1k. To find the values associated with an n-gram wi...wi−n+1, we first search the root array A for j1 such that A[j1] = wi. We retrieve the address of Awi from V [j1], and we then search for j2 such that Awi[j2] = wi−1. We continue to iterate this process until we find the value associated with the longest suffix of our ngram stored in the trie. We therefore iteratively retrieve the parameters needed to compute Equation 2, returning to the root exactly once if backoff parameters are required. 3.1.1 K-ary search and B-trees On a GPU, the trie search algorithm described above is not efficient because it makes extensive use of binary search, an inherently serial algorithm. However, there is a natural extension of binary search that is well-suited to GPU: K-ary search (Hwu, 2011). Rather than divide an array in two as in binary search, K-ary search divides it into K equal parts and performs K −1 comparisons simultaneously (Figure 5). To accommodate large language models, the complete trie must reside in global memory, and in this setting, K-ary search on an array is inefficient, since the parallel threads will access nonconsecutive memory locations. To avoid this, we require a data structure that places the K elements compared by K-ary search in consecutive memory locations so that they can be copied from global to shared memory with a coalesced read. This data structure is a B-tree (Bayer and McCreight, 1970), which is widely used in databases, filesystems and information retrieval. Informally, a B-tree generalizes binary trees in exactly the same way that K-ary search generalizes binary search (Figure 6). More formally, a B-tree is a recursive data structure that replaces arrays A and V at each node of the trie. A Btree node of size K consists of three arrays: a 1indexed array B of K −1 keys; a 1-indexed array V of K −1 associated values so that V [j] is the value associated with key B[j]; and, if the node is not a leaf, a 0-indexed array C of K addresses to child B-trees. 
The keys in B are sorted, and the subtree at the address pointed to by child C[j] represents only key-value pairs for keys between B[j] and B[j + 1] when 1 ≤ j < K − 1, keys less than B[1] when j = 0, or keys greater than B[K − 1] when j = K − 1. To find a key k in a B-tree, we start at the root node, and we seek j such that B[j] ≤ k < B[j + 1]. If B[j] = k we return V[j]; otherwise, if the node is not a leaf node, we return the result of recursively querying the B-tree node at the address C[j] (C[0] if k < B[1], or C[K − 1] if k > B[K − 1]). If the key is not found in array B of a leaf, the query fails.

Our complete data structure is a trie in which each node except the root is a B-tree (Figure 7). Since the root contains all possible keys, its keys are simply represented by an array A, which can be indexed in constant time without any search.

Figure 4: Execution of a binary search for key 15. Each row represents a time step and highlights the element compared to the key. Finding key 15 requires four time steps and four comparisons.

Figure 5: Execution of K-ary search with the same input as Figure 4, for K = 8. The first time step executes seven comparisons in parallel, and the query is recovered in two time steps.

Figure 6: In a B-tree, the elements compared in K-ary search are consecutive in memory. We also show the layout of an individual entry.

4 Memory layout and implementation

Each trie node represents a unique n-gram w_i ... w_{i-n+1}, and if a B-tree node within the trie node contains key w_{i-n}, then it must also contain the associated values \hat{P}(w_i ... w_{i-n}), β(w_i ... w_{i-n}), and the address of the trie node representing w_i ... w_{i-n} (Figure 6, Figure 3). The entire language model is laid out in memory as a single byte array in which trie nodes are visited in breadth-first order and the B-tree representation of each node is also visited in breadth-first order (Figure 7).

Since our device has a 64-bit architecture, pointers can address 18.1 exabytes of memory, far more than available. To save space, our data structure does not store global addresses; it instead stores the difference in addresses between the parent node and each child. Since the array is aligned to four bytes, these relative addresses are divided by four in the representation, and multiplied by four at runtime to obtain the true offset. This enables us to encode relative addresses of up to 16 GB, still larger than the actual device memory. We estimate that relative addresses of this size allow us to store a model containing around one billion n-grams.[2]

[2] We estimate this by observing that a model containing 423M n-grams takes 3.8 GB of memory, and assuming approximately linear scaling, though there is some variance depending on the distribution of the n-grams.

Unlike CPU language model implementations such as those of Heafield (2011) and Watanabe et al. (2009), we do not employ further compression techniques such as variable-byte encoding or LOUDS, because their runtime decompression algorithms require branching code, which our implementation must avoid. We optimize the node representation for coalesced reads by storing the keys of each B-tree node consecutively in memory, followed by the corresponding values, also stored consecutively (Figure 6). When the data structure is traversed, only key arrays are iteratively copied to shared memory until a value array is needed. This design minimizes the number of reads from global memory.
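To illustrate how a query walks this layout, the sketch below reads one B-tree node from a flat byte array, scans its keys, and either returns the matching entry or follows a relative child offset (stored divided by four, as described above). The field order, field widths, and the zero-offset-means-leaf convention are assumptions made for this example, loosely following Figure 8; they are not gLM's actual binary format. On the GPU the K − 1 key comparisons happen in parallel threads after a single coalesced read, whereas this CPU sketch scans them sequentially.

#include <cstdint>
#include <cstring>
#include <optional>

// One value entry: n-gram probability, backoff weight, and the offset of the
// child trie node (widths assumed for the sake of the example).
struct Entry { float prob; float backoff; std::uint32_t trie_node; };

constexpr std::uint32_t K = 4;  // B-tree fan-out (gLM uses K = 31 by default)

// Searches the B-tree starting at byte offset `node_off` of the flat model
// array `lm` for `word_id`. Child offsets are stored relative to the current
// node and divided by four; an offset of 0 marks "no child" in this sketch.
std::optional<Entry> node_find(const std::uint8_t* lm, std::uint64_t node_off,
                               std::uint32_t word_id) {
    while (true) {
        const std::uint8_t* node = lm + node_off;
        std::uint32_t child[K], key[K - 1];
        std::memcpy(child, node, sizeof child);
        std::memcpy(key, node + sizeof child, sizeof key);  // one coalesced read on GPU
        std::uint32_t j = 0;
        while (j < K - 1 && key[j] < word_id) ++j;           // K-1 parallel compares on GPU
        if (j < K - 1 && key[j] == word_id) {                // found: only now read a value
            Entry e;
            std::memcpy(&e, node + sizeof child + sizeof key + j * sizeof(Entry), sizeof e);
            return e;
        }
        if (child[j] == 0) return std::nullopt;              // leaf B-tree node: key absent
        node_off += std::uint64_t(child[j]) * 4;             // descend via relative offset
    }
}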
Figure 7: Illustration of the complete data structure, showing a root trie node as an array representing unigrams, and nine B-trees, each representing a single trie node. The trie nodes are numbered according to the order in which they are laid out in memory.

Figure 8: Layout of a single B-tree node for K = 4. Relative addresses of the four child B-tree nodes (array C) are followed by three keys (array B) and three values (array V), each consisting of an n-gram probability, backoff, and address of the child trie node.

4.1 Construction

The canonical B-tree construction algorithm (Cormen et al., 2009) produces nodes that are not fully saturated, which is desirable for B-trees that support insertion. However, our B-trees are immutable, and unsaturated nodes of unpredictable size lead to underutilization of threads, warp divergence, and deeper trees that require more iterations to query. So, we use a construction algorithm inspired by Cesarini and Soda (1983) and Rosenberg and Snyder (1981). It is implemented on CPU, and the resulting array is copied to GPU memory to perform queries.

Since the entire set of keys and values is known in advance for each n-gram, our construction algorithm receives them in sorted order as the array A described in Section 3.1. The procedure then splits this array into K consecutive subarrays of equal size, leaving K − 1 individual keys between the subarrays.[3] These K − 1 keys become the keys of the root B-tree. The procedure is then applied recursively to each subarray. When applied to an array whose size is less than K, the algorithm returns a leaf node. When applied to an array whose size is greater than or equal to K but less than 2K, it splits the array into a node containing the first K − 1 keys and a single leaf node containing the remaining keys, which becomes a child of the first.

[3] Since the size of the array may not be exactly divisible by K, some subarrays may differ in length by one.

4.2 Batch queries

To fully saturate our GPU we execute many queries simultaneously. A grid receives the complete set of N-gram queries, and each block processes a single query by performing a sequence of K-ary searches on B-tree nodes.

5 Experiments

We compared our open-source GPU language model gLM with the CPU language model KenLM (Heafield, 2011).[4][5] KenLM can use two quite different language model data structures: a fast probing hash table, and a more compact but slower trie, which inspired our own language model design. Except where noted, our B-tree node size is K = 31, and we measure throughput in terms of query speed, which does not include the cost of initializing or copying data structures, or the cost of moving data to or from the GPU.

[4] https://github.com/XapaJIaMnu/gLM
[5] https://github.com/kpu/kenlm/commit/9495443

We performed our GPU experiments on an Nvidia GeForce GTX, a state-of-the-art GPU released in the first quarter of 2015 and costing 1000 USD. Our CPU experiments were performed on two different devices: one for single-threaded tests and one for multi-threaded tests. For the single-threaded CPU tests, we used an Intel Quad Core i7 4720HQ CPU, released in the first quarter of 2015, costing 280 USD, and achieving 85% of the speed of a state-of-the-art consumer-grade CPU when single-threaded.
For the multi-threaded CPU tests we used two Intel Xeon E5-2680 CPUs, offering a combined 16 cores and 32 threads, and costing 3,500 USD together at the time of their release. Together, their performance specifications are similar to the recently released Intel Xeon E5-2698 v3 (16 cores, 32 threads, costing 3,500 USD). The different CPU configurations are favorable to the CPU implementation in their tested condition: the consumer-grade CPU has higher clock speeds in single-threaded mode than the professional-grade CPU, while the professional-grade CPUs provide many more cores (though at lower clock speeds) when fully saturated. Except where noted, CPU throughput is reported for the single-threaded condition.

Except where noted, our language model is the Moses 3.0 release English 5-gram language model, containing 88 million n-grams.[6] Our benchmark task computes perplexity on data extracted from the Common Crawl dataset used for the 2013 Workshop on Machine Translation, which contains 74 million words across 3.2 million sentences.[7] Both gLM and KenLM produce identical perplexities, so we are certain that our implementation is correct. Except where noted, the faster KenLM probing backend is used. The perplexity task has been used as a basic test of other language model implementations (Osborne et al., 2014; Heafield et al., 2015).

[6] http://www.statmt.org/moses/RELEASE-3.0/models/fren/lm/europarl.lm.1
[7] http://www.statmt.org/wmt13/training-parallelcommoncrawl.tgz

LM (threads)         Throughput   Size (GB)
KenLM probing (1)    10.3M        1.8
KenLM probing (16)   49.8M        1.8
KenLM probing (32)   120.4M       1.8
KenLM trie (1)       4.5M         0.8
gLM                  65.5M        1.2

Table 3: Comparison of gLM and KenLM on throughput (N-gram queries per second) and data structure size.

5.1 Query speed

When compared to single-threaded KenLM, our results (Table 3) show that gLM is just over six times faster than the fast probing hash table, and nearly fifteen times faster than the trie data structure, which is quite similar to our own, though slightly smaller due to the use of compression. The raw speed of the GPU is apparent, since we were able to obtain our results with a relatively short engineering effort when compared to that of KenLM, which has been optimized over several years.

When we fully saturate our professional-grade CPU, using all sixteen cores and sixteen hyperthreads, KenLM is about twice as fast as gLM. However, our CPU costs nearly four times as much as our GPU, so economically, this comparison favors the GPU.

At first glance, the scaling from one to sixteen threads is surprisingly sublinear. This is not due to vastly different computational power of the individual cores, which are actually very similar. It is instead due to scheduling, cache contention, and, most importantly, the fact that our CPUs implement dynamic overclocking: the base clock rate of 2.7 GHz at full saturation increases to 3.5 GHz when the professional CPU is underutilized, as when single-threaded; the rates for the consumer-grade CPU similarly increase from 2.6 to 3.6 GHz.[8]

[8] Intel calls this Intel Turbo Boost.

5.2 Effect of B-tree node size

What is the optimal K for our B-tree node size? We hypothesized that the optimal size would be one that approaches the size of a coalesced memory read, which should allow us to maximize parallelism while minimizing global memory accesses and B-tree depth. Since the size of a coalesced read is 128 bytes and keys are four bytes, we hypothesized that the optimal node size would be around K = 32, which is also the size of a warp.
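Spelled out, the back-of-the-envelope arithmetic behind this estimate (a restatement of the reasoning above, not a new result) is:

K \approx \frac{\text{coalesced read size}}{\text{key width}} = \frac{128~\text{bytes}}{4~\text{bytes per key}} = 32~\text{keys} \approx \text{one 32-thread warp}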
We tested this by running experiments that varied K from 5 to 59, and the results (Figure 9) confirmed our hypothesis. As the node size increases, throughput increases until we reach a node size of 33, where it steeply drops. This result highlights the importance of designing data structures that minimize global memory access and maximize parallelism.

Figure 9: Effect of B-tree node size on throughput (n-gram queries per second).

We were curious about what effect this node size had on the depth of the B-trees representing each trie node. Measuring this, we discovered that for bigrams, 88% of the trie nodes have a depth of one; we call these B-stumps, and they can be exhaustively searched in a single parallel operation. For trigrams, 97% of trie nodes are B-stumps, and for higher-order n-grams the percentage exceeds 99%.

5.3 Saturating the GPU

A limitation of our approach is that it is only effective in high-throughput situations that continually saturate the GPU. In situations where a language model is queried only intermittently or only in short bursts, a GPU implementation may not be useful. We wanted to understand the point at which this saturation occurs, so we ran experiments varying the batch size sent to our language model, comparing its behavior with that of KenLM. To understand situations in which the GPU hosts the language model for query by an external CPU process, we measure query speed with and without the cost of copying queries to the device.

Our results (Figure 10) suggest that the device is nearly saturated once the batch size reaches a thousand queries, and fully saturated by ten thousand queries. Throughput remains steady as batch size increases beyond this point. Even with the cost of copying batch queries to GPU memory, throughput is more than three times higher than that of single-threaded KenLM. We have not included results for multi-threaded KenLM scaling in Figure 10, but they are similar to the single-threaded case: throughput (as shown in Table 3) plateaus at around one hundred sentences per thread.

Figure 10: Throughput (N-gram queries per second) vs. batch size for gLM, KenLM probing, and KenLM trie.

             Regular LM   Big LM
KenLM        10.2M        8.2M
KenLM trie   4.5M         3.0M
gLM          65.5M        55M

Table 4: Throughput comparison (n-gram queries per second) between gLM and KenLM with a regular language model and a model 5 times larger.

5.4 Effect of model size

To understand the effect of model size on query speed, we built a language model with 423 million n-grams, five times larger than our basic model. The results (Table 4) show an 18% slowdown for gLM and a 20% slowdown for KenLM, showing that model size affects both implementations similarly.

5.5 Effect of N-gram order on performance

All experiments so far use an N-gram order of five. We hypothesized that lowering the n-gram order of the model would lead to faster query times (Table 5). We observe that N-gram order affects throughput of the GPU language model much more than the CPU one. This is likely due to the effects of backoff queries, which are more optimized in KenLM. At higher orders, more backoff queries occur, which reduces throughput for gLM.

             5-gram   4-gram   3-gram
KenLM        10.2M    9.8M     11.5M
KenLM trie   4.5M     4.5M     5.2M
gLM          65.5M    71.9M    93.7M

Table 5: Throughput comparison (n-gram queries per second) achieved using lower-order n-gram models.
5.6 Effect of templated code

Our implementation initially relied on hard-coded values for parameters such as B-tree node size and N-gram order, which we later replaced with runtime parameters. Surprisingly, we observed that this led to a reduction in throughput from 65.6 million queries per second to 59.0 million, which we traced back to the use of dynamically allocated shared memory, as well as compiler optimizations that only apply to compile-time constants. To remove this effect, we heavily templated our code, using as many compile-time constants as possible; this improves throughput while still allowing parameters to be changed, albeit through recompilation.

5.7 Bottlenecks: computation or memory?

On CPU, language models are typically memory-bound: most cycles are spent in random memory accesses, with little computation between accesses. To see if this is true of gLM, we experimented with two variants of the benchmark in Table 3: one in which the GPU core was underclocked, and one in which the memory was underclocked. This effectively simulates two variations of our hardware: a GPU with slower cores but identical memory, and one with slower memory but identical processing speed. We found that throughput decreases by about 10% when underclocking the cores by 10%. On the other hand, underclocking memory by 25% reduced throughput by 1%. We therefore conclude that gLM is computation-bound. We expect that gLM will continue to improve on parallel devices offering higher theoretical floating-point performance.

6 Conclusion

Our language model is implemented on a GPU, but its general design (and much of the actual code) is likely to be useful for other hardware that supports SIMD parallelism, such as the Xeon Phi. Because it uses batch processing, our on-chip language model could be integrated into a machine translation decoder using strategies similar to those used to integrate an on-network language model nearly a decade ago (Brants et al., 2007). An alternative method of integration would be to move the decoder itself to GPU. For phrase-based translation, this would require a translation model and a dynamic programming search algorithm on GPU. Translation models have been implemented on GPU by He et al. (2015), while related search algorithms for speech recognition (Chong et al., 2009; Chong et al., 2008) and parsing (Canny et al., 2013; Hall et al., 2014) have been developed for GPU. We intend to explore these possibilities in future work.

Acknowledgements

This work was conducted within the scope of the Horizon 2020 Innovation Action Modern MT, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 645487. We thank Kenneth Heafield, Ulrich Germann, Rico Sennrich, Hieu Hoang, Federico Fancellu, Nathan Schneider, Naomi Saphra, Sorcha Gilroy, Clara Vania and the anonymous reviewers for productive discussion of this work and helpful comments on previous drafts of the paper. Any errors are our own.

References

R. Bayer and E. McCreight. 1970. Organization and maintenance of large ordered indices. In Proceedings of the 1970 ACM SIGFIDET (Now SIGMOD) Workshop on Data Description, Access and Control, SIGFIDET '70, pages 107–141.

T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP-CoNLL.

J. Canny, D. Hall, and D. Klein. 2013. A multi-teraflop constituency parser using GPUs. In Proceedings of EMNLP.

F. Cesarini and G. Soda. 1983. An algorithm to construct a compact B-tree in case of ordered keys.
Information Processing Letters, 17(1):13–16.

S. F. Chen and J. Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359–393.

J. Chong, Y. Yi, A. Faria, N. R. Satish, and K. Keutzer. 2008. Data-parallel large vocabulary continuous speech recognition on graphics processors. Technical Report UCB/EECS-2008-69, EECS Department, University of California, Berkeley, May.

J. Chong, E. Gonina, Y. Yi, and K. Keutzer. 2009. A fully data parallel WFST-based large vocabulary continuous speech recognition on a graphics processing unit. In Proceedings of Interspeech.

T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. 2009. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition.

M. Federico, N. Bertoldi, and M. Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proceedings of Interspeech, pages 1618–1621. ISCA.

S. Green, D. Cer, and C. Manning. 2014. Phrasal: A toolkit for new directions in statistical machine translation. In Proceedings of WMT.

D. Hall, T. Berg-Kirkpatrick, and D. Klein. 2014. Sparser, better, faster GPU parsing. In Proceedings of ACL.

H. He, J. Lin, and A. Lopez. 2015. Gappy pattern matching on GPUs for on-demand extraction of hierarchical translation grammars. TACL, 3:87–100.

K. Heafield, R. Kshirsagar, and S. Barona. 2015. Language identification and modeling in specialized hardware. In Proceedings of ACL-IJCNLP, July.

K. Heafield. 2011. KenLM: faster and smaller language model queries. In Proceedings of WMT, pages 187–197, July.

K. Heafield. 2013. Efficient Language Modeling Algorithms with Applications to Statistical Machine Translation. Ph.D. thesis, Carnegie Mellon University, September.

W.-m. W. Hwu. 2011. GPU Computing Gems Emerald Edition. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1st edition.

Nvidia Corporation. 2015. Nvidia CUDA Compute Unified Device Architecture Programming Guide. Nvidia Corporation. https://docs.nvidia.com/cuda/cuda-c-programmingguide/.

M. Osborne, A. Lall, and B. Van Durme. 2014. Exponential reservoir sampling for streaming language models. In Proceedings of ACL, pages 687–692.

A. L. Rosenberg and L. Snyder. 1981. Time- and space-optimality in B-trees. ACM Trans. Database Syst., 6(1):174–193, Mar.

D. Talbot and M. Osborne. 2007. Smoothed Bloom filter language models: Tera-scale LMs on the cheap. In Proceedings of EMNLP-CoNLL, pages 468–476.

T. Watanabe, H. Tsukada, and H. Isozaki. 2009. A succinct N-gram language model. In Proceedings of ACL-IJCNLP.

M. Yasuhara, T. Tanaka, J. Norimatsu, and M. Yamamoto. 2013. An efficient language model using double-array structures. In Proceedings of EMNLP, pages 222–232.